
Are identity management solutions ready for a digital AI-based workforce?

Published November 2025
Read time 7 minutes
What you will learn
Adopting Agentic AI means addressing three unique challenges posed by these new agents. Here they are, and what to do about them.

Nearly every enterprise across all industries is experimenting with autonomous AI agents and enabling its workforce to leverage AI assistants.

These AI agents aren’t just chatbots: some act as assistants under the delegated authority of a human user, while others operate as autonomous agents with their own independent credentials.

While most organizations are focused on gaining a competitive edge, identity risk is rising because the IAM controls we built for human workers were never designed to handle these new AI agents operating as an “army of digital interns.”

85% of organizations have begun implementing AI, resulting in a 35% increase in productivity

KPMG recently reported that 85% of organizations have begun implementing AI, and those that have deployed report a 35% increase in productivity on average. At the same time, nearly every software vendor is adding AI capabilities to their products, from Office 365 Co-Pilot to ChatGPT Enterprise, along with a new breed of AI-enabled browsers from OpenAI, Perplexity, and Strawberry.

The challenge is that Identity Management solutions have traditionally treated humans as the primary subject needing an identity and authorization to access a resource. Several solutions have been updated to manage the machine identities used to access resources and data, but none were originally designed with autonomous AI agents in mind. Organizations adopting Agentic AI will therefore need to address the unique challenges these new agents pose.

Agentic AI will typically access resources in one of two ways: either by using delegated user tokens (on behalf of a human) or by using its own machine credentials. From an identity perspective, these two classes of AI agents behave very differently. An AI assistant must act through the identity of the human it supports, while an autonomous AI agent operates under its own distinct identity with its own permissions to applications and data.
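
To make the distinction concrete, here is a minimal sketch of the two patterns using standard OAuth 2.0 grant types; the token endpoint, client IDs, and scopes are placeholders rather than any specific vendor’s API.

```python
# Sketch of the two access patterns, using standard OAuth 2.0 grant types.
# The token endpoint, client IDs, and scopes are placeholders for illustration.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"  # hypothetical IdP endpoint

def delegated_token(agent_client_id: str, agent_secret: str, user_token: str) -> str:
    """AI assistant pattern: exchange the signed-in user's token for a downstream
    token (RFC 8693 token exchange), so every action stays tied to the human."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "client_id": agent_client_id,
        "client_secret": agent_secret,
        "subject_token": user_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": "crm.read",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def machine_token(agent_client_id: str, agent_secret: str) -> str:
    """Autonomous agent pattern: the agent authenticates as itself with the
    client credentials grant and receives a token scoped to its own permissions."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": agent_client_id,
        "client_secret": agent_secret,
        "scope": "crm.read",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```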

Several of the capabilities of traditional Identity Management need to be reviewed, improved, or more stringently enforced to address these three new challenges.

1. How do you onboard the new AI-based digital workforce?

Onboarding a new employee to an organization is a well-managed process that typically starts with the HR department using its Human Capital Management (HCM) solution to track a new hire or contractor’s employment lifecycle with the organization.

An employee’s digital identity typically begins with the new hire event from the HCM being delivered to the Identity Governance and Administration (IGA) solution, which creates the initial identity and provisions access to applications and services based on the roles assigned to each employee.

Since autonomous AI agents have no HCM equivalent tracking their lifecycle, that responsibility will fall to the IGA solution. This lifecycle management will need to handle both the discovery of existing agents and the registration of new agents before assigning each a unique identity.

Unique identities are important to ensure accountability for all access by these agents. The SPIFFE project offers a good model to follow, with SPIFFE Verifiable Identity Documents (SVIDs) serving as unique identifiers. The onboarding process will also need to identify and track the agent's business owner, similar to tracking the hiring manager for an employee, to enable proper performance and access reviews. This process must also enable administrative oversight throughout the agent's lifetime.
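
As a rough illustration (not a prescribed schema), an agent onboarding record might look something like this; the trust domain, naming scheme, and fields are assumptions made for the sake of the example.

```python
# Minimal sketch of an agent onboarding record. The trust domain, path
# convention, and fields are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class AgentIdentity:
    spiffe_id: str            # unique, SPIFFE-style identifier for the agent
    business_owner: str       # the "hiring manager" accountable for the agent
    purpose: str              # what the agent does, in business terms
    roles: list[str] = field(default_factory=list)
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def register_agent(name: str, owner: str, purpose: str, roles: list[str]) -> AgentIdentity:
    """Create a unique identity for a newly discovered or requested AI agent."""
    spiffe_id = f"spiffe://example.com/ai-agent/{name}/{uuid.uuid4()}"
    return AgentIdentity(spiffe_id, owner, purpose, roles)

# Example: onboard an invoice-processing agent owned by the AP manager.
agent = register_agent(
    name="invoice-processor",
    owner="jane.doe@example.com",
    purpose="Extract and post invoices from the AP mailbox",
    roles=["erp.invoice.writer"],
)
print(agent.spiffe_id)
```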

With widespread experimentation already underway, often without proper oversight, the first step is to discover any AI agents or LLMs that might already be in use. Delinea Continuous Identity Discovery (CID) provides the tooling necessary to continuously discover AI agents in your networks.

Another aspect of onboarding autonomous AI agents that we should not overlook is how they are governed once they have an identity. Like new employees, these agents will use their own identities to access business information of varying sensitivity and take actions on behalf of the organization, so they must be brought under the same policy and control framework.

IGA solutions will need to extend their normal joiner/mover/leaver processes to cover both AI identities and human ones. That means treating an autonomous AI agent as a managed account: registering it, assigning the right roles and entitlements, applying security and usage policies, and periodically reviewing and recertifying its access rather than "training" the model itself in security awareness.

The final aspect of lifecycle management is the “leaver” process. For an AI agent, it may be as simple as termination by an administrator or a manager. If that event is never communicated, however, the agent's identity can simply be abandoned, so IGA solutions should extend their integrations with discovery solutions to ensure that any accounts abandoned by an agent are deprovisioned.
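
As a simplified illustration of that discovery-to-IGA integration, the sketch below reconciles the agents a discovery tool can see against the identities the IGA system knows about; the data and function names are hypothetical.

```python
# Hypothetical reconciliation between a discovery feed and the IGA registry:
# flag agents that were never onboarded, and deprovision identities whose
# agent is no longer seen on the network.
def reconcile(discovered_agent_ids: set[str], registered_agent_ids: set[str]):
    unregistered = discovered_agent_ids - registered_agent_ids   # shadow agents to onboard
    abandoned = registered_agent_ids - discovered_agent_ids      # leavers to clean up
    return unregistered, abandoned

discovered = {"invoice-processor", "sales-summarizer"}
registered = {"invoice-processor", "old-reporting-bot"}

to_onboard, to_deprovision = reconcile(discovered, registered)
for agent_id in to_deprovision:
    # In a real IGA integration this would trigger the leaver workflow:
    # disable the account, revoke entitlements, and notify the business owner.
    print(f"Deprovision abandoned agent identity: {agent_id}")
```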

2. How do you authorize the agent to access resources?

Traditional machine or non-human identity use cases follow a pattern of predictable access: a service account accesses a fixed set of resources to perform a repetitive task. Agentic AI is entirely different and more human-like, as these agents operate at machine speed and make autonomous decisions across a wide range of tasks.

Since these agents will access applications and data, privileges must be tightly controlled and continuously monitored to ensure compliance with business policies and to minimize risk.

The first aspect of access management to consider is the type of credential an agent uses to access resources, since credentials are an increasingly common target of cyberattacks. Whenever possible, AI agents should use short-lived credentials to reduce the risk of credential leaks that lead to data breaches.
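
For example, rather than handing an agent a long-lived API key, the identity provider can mint a token that expires within minutes. The sketch below uses the PyJWT library with placeholder keys and claims to show the idea; it is not tied to any particular token service.

```python
# Sketch: mint a short-lived, narrowly scoped token for an agent task.
# The signing key, subject, and scope are placeholders for illustration.
from datetime import datetime, timedelta, timezone
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-managed-key"

def issue_agent_token(agent_id: str, scope: str, ttl_minutes: int = 15) -> str:
    now = datetime.now(timezone.utc)
    claims = {
        "sub": agent_id,                                  # the agent's unique identity
        "scope": scope,                                   # only what this task needs
        "iat": now,
        "exp": now + timedelta(minutes=ttl_minutes),      # short lifetime limits leak impact
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = issue_agent_token(
    "spiffe://example.com/ai-agent/invoice-processor", "erp.invoice.read")
```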

If the access requires traditional credentials, make sure the agent checks them out of a PAM Vault so the credentials can be rotated on check-in. Delinea has recently released an open source MCP Server to enable Agentic AI access to Secret Server for several operations, from administrative tasks to secret access.
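
The check-out/check-in pattern looks roughly like the sketch below; the vault object and its check_out/check_in methods are hypothetical stand-ins for whatever PAM vault integration you actually use, not the Secret Server or MCP server API.

```python
# Illustrative check-out / check-in flow. The vault object and its methods are
# hypothetical placeholders, not a specific vendor's API.
from contextlib import contextmanager

@contextmanager
def checked_out_secret(vault, secret_id: str, reason: str):
    """Check a credential out for a single task and check it back in when done,
    allowing the vault to rotate it on check-in."""
    secret = vault.check_out(secret_id, reason=reason)   # audited check-out
    try:
        yield secret
    finally:
        vault.check_in(secret_id)                        # triggers rotation policy

# Usage inside an agent task:
# with checked_out_secret(vault, "db/reporting-user", reason="nightly summary") as creds:
#     run_report(creds)
```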

Enforcing least privilege in real time is essential to ensure appropriate guardrails are in place to limit what the AI Agents can access. That access must be continuously monitored to guard against suspicious activity. The adoption of least privilege starts with establishing a baseline of privileges granted and contrasting that with the actual privileges used by the agent.
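
Conceptually, that comparison reduces to a set difference between the privileges granted and the privileges actually exercised during an observation window; the sketch below is a simplified illustration, not how any particular product computes it.

```python
# Simplified illustration of right-sizing an agent's privileges by comparing
# what was granted against what was actually used during an observation window.
def rightsizing_report(granted: set[str], used: set[str]) -> dict[str, set[str]]:
    return {
        "unused_grants": granted - used,      # candidates to revoke
        "out_of_policy_use": used - granted,  # should be empty; investigate if not
        "keep": granted & used,
    }

granted = {"s3:GetObject", "s3:PutObject", "dynamodb:Query", "iam:PassRole"}
used = {"s3:GetObject", "dynamodb:Query"}
print(rightsizing_report(granted, used))
```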

Delinea Privilege Control for Cloud Entitlements (PCCE) continuously monitors both the privileges granted and the privileges actually used, making it possible to work out the appropriate level of privilege for the agent. Additionally, Delinea provides session controls that monitor all access and use Iris AI to analyze sessions for any suspicious activity requiring human oversight.

3. How do assistants operate on behalf of an employee?

Business users are increasingly using AI assistants to enhance their productivity and efficiency. A recent Wharton study reports that 75% of enterprises are seeing a positive ROI, which is prompting many organizations to license AI assistants for their employees as a source of competitive edge in the market.

75% of enterprises are seeing a positive ROI when using AI assistants to enhance productivity and efficiency

OpenAI reports that over 1 million business customers worldwide are already using ChatGPT for work or building on its developer platform. This adoption of AI for employee use needs to be properly managed to prevent an increase in identity risk.

Organizations adopting AI agents for their employees should strive to maintain identity governance over these new agents so that an agent's activity can be associated with the employee using it. It is especially important to be able to clearly distinguish whether access to enterprise applications and data came from the employee or from their AI agent.

While there are many user-initiated tools that enable AI Agents to impersonate a user and work within an application session, there are other AI-enabled tools that require the user to provide a session or credentials to enable access. One solution to this challenge may be to adopt one of the core principles that PAM tools have used to enable IT admins to separate their administrative duties from their business activities.

This is done with an independent admin account, often called a "dash-a" account after the common naming convention firstname.lastname-a@company.com. Those accounts are isolated enough to greatly reduce phishing exposure for admin accounts, since they can’t access the internet or send and receive email.

Adopting this approach would enable a user to create multiple associated agent accounts for these new AI assistants. Existing IAM tooling can easily help manage these new identities, though vendors may need to expand their self-service interfaces so business users can more seamlessly create, monitor, and terminate these new Agentic AI identities.
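
Borrowing the dash-a idea, an organization might derive agent account names from the owning employee's account so every agent action remains attributable to a person. The naming scheme below is just one possible convention, not an established standard.

```python
# One possible naming scheme for per-employee agent accounts, modeled on the
# "dash-a" admin account convention. The suffix format is an assumption.
def agent_account_name(employee_upn: str, agent_label: str) -> str:
    """Derive an agent account from the owning employee's UPN, e.g.
    jane.doe@company.com -> jane.doe-ai-research@company.com"""
    local, _, domain = employee_upn.partition("@")
    return f"{local}-ai-{agent_label}@{domain}"

print(agent_account_name("jane.doe@company.com", "research"))
# jane.doe-ai-research@company.com
```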

Once an organization can clearly distinguish the identities of its employees from those of these new AI agents, continuous monitoring should be used to ensure proper human oversight and approval where needed. Delinea provides session monitoring with AI-driven analysis to continuously identify suspicious or anomalous activity that may require human oversight.

The way forward: Identity-first security for AI agents

Agentic AI adoption is inevitable, but security doesn’t have to slow innovation.  Several identity and privileged access best practices can be used to help control and monitor Agentic AI usage:

  1. Continuously discover AI agents:
    Scan your environment to find the AI agents and LLMs already in use so they can be brought under governance.
  2. Implement identity-first governance:
    Treat every AI agent as a first-class digital identity. Assign unique identifiers, integrate them into IAM/IGA systems from day one, and ensure clear sponsorship or ownership.
  3. Move from static secrets to dynamic credentials:
    Provision short-lived tokens and adopt just-in-time access. Eliminate shared credentials and rotate or retire secrets automatically.
  4. Enforce least privilege and scope authorization:
    Use least privilege to grant only the permissions needed for a specific task, and refine access through continuous monitoring.
  5. Adopt continuous monitoring and risk-based remediation:
    Use AI-powered analytics to detect anomalies, require human approval for sensitive actions, and maintain detailed audit trails.
  6. Educate and involve humans:
    Train employees on secure and ethical AI usage, and ensure human-in-the-loop oversight for highly sensitive tasks.

The digital workforce is expanding beyond human employees. 

With AI agents operating like interns, administrators, and even system owners, organizations must extend identity and privileged access controls to these nonhuman participants.  By adopting identity-first security, least privilege, just-in-time access, and continuous monitoring, companies can unlock the productivity benefits of AI while safeguarding their data and maintaining trust.

For organizations seeking guidance, our own identity security platform applies these principles across humans, machines and AI agents, delivering discovery, least privilege enforcement, risk-based response and auditability out of the box. 

It’s time to prepare your IAM/IGA program for a digital workforce that includes an “army of agents.”