Delinea | Privileged Access Management Blog

HITL and TTT: Applying Gartner's Time to Trust to Identity

Written by Tony Goulding | Jan 29, 2026 1:00:00 PM

Agentic AI is advancing faster than most organizations' ability to explain why they trust it.

Boards are approving investments. Product teams are embedding agents into workflows. Security leaders are being asked to sign off on systems that can make decisions and take action with minimal human involvement.

Yet when trust comes up, the conversation often stalls. Trust feels subjective, difficult to quantify, and hard to defend when things go wrong.

Gartner's introduction of Time to Trust (TTT) shifts that conversation. It treats trust not as a belief but as a measurable outcome. From an identity security perspective, that shift matters because identity controls are where autonomy is either earned or constrained in practice.

This blog takes an explicit identity security perspective. Gartner's analysis applies broadly to agentic AI across products and industries. Here, that framework is interpreted through identity, access, and governance, as these are the areas where trust decisions are enforced, audited, and explained.

Who cares about agentic AI?

This is written for CISOs, CIOs, security architects, and leaders responsible for AI governance who are accountable for enabling agentic AI without sacrificing control, visibility, or credibility.

The scenario Gartner describes is already unfolding. Agentic AI systems are being deployed with some level of human-in-the-loop (HITL) oversight. This dependence is not accidental; it reflects the current, early-stage maturity of AI security and governance. Over time, organizations expect oversight to decrease as confidence grows. The problem is that most organizations cannot clearly explain when, why, or how that transition should occur.

Gartner's case-based research found that customer trust is the number one inhibitor to agentic AI adoption. Yet nearly all vendors interviewed failed to demonstrate a structured way to measure or improve that trust.

Trust is often conflated with model accuracy, vendor reputation, or informal comfort gained over time.

None of those holds up well during audits, incidents, or board scrutiny. Gartner's research suggests that without a way to measure and improve trust, agentic AI adoption will stall before reaching the early majority, at roughly 16% of the market, over the next several years.

From an identity security standpoint, this creates a familiar risk pattern. Access decisions are increasingly automated, but the criteria for removing human approval are vague. Shadow AI emerges as teams bypass friction. After an incident, organizations can explain what the agent did, but not why it was allowed to act autonomously at that moment.

Gartner's TTT framework addresses this gap by making trust measurable and progressive rather than assumed. Gartner positions TTT not only as a governance concept but also as a product performance metric that helps leaders assess readiness, scale adoption, and demonstrate value to the business.

Impacts and resolutions using modern identity security technology

Gartner defines Time to Trust as the time it takes to reduce reliance on HITL oversight to an acceptable, predefined threshold, using the decrease in HITL as a proxy for increasing trust. That threshold does not assume zero human involvement.

Gartner is explicit that full autonomy is rarely the goal, and that many agentic workflows will always retain some level of human oversight.

This distinction matters deeply for identity security.

Impact 1: Trust is treated as binary when it is not

Many organizations implicitly treat AI trust as an all-or-nothing proposition. Either every action requires approval, or the system is deemed safe enough to run freely. Gartner's research shows that this framing is a reason why adoption can stall.

Identity systems already operate on graduated trust. Just-in-time access, risk-based decisions, and policy thresholds all assume that trust is contextual and reversible. TTT aligns naturally with this model by allowing organizations to define acceptable HITL thresholds per workflow rather than per technology.

Impact 2: Human oversight becomes a bottleneck or a liability

Gartner observed that enabling HITL by default increases confidence during early adoption, but that reliance on HITL diminishes as trust grows. The risk is not HITL itself but unmanaged HITL: too much slows the business, too little removes accountability. As TTT improves, the human role does not disappear; it evolves, shifting from approving every decision to orchestrating the policies, exceptions, and guardrails that shape how AI operates at scale.

From an identity security perspective, this is where authorization matters. Systems must make it easy to toggle HITL at specific decision points and to record when and why that oversight was removed. Without that, organizations shorten TTT informally, often without realizing it.

Impact 3: Trust is not revisited as conditions change

Trust degrades as well as improves. Gartner notes that factors such as use case complexity, risk appetite, accuracy, and repeatability influence TTT. Identity environments change constantly. Permissions sprawl. Context shifts. Threat levels rise.

Static approvals create false confidence. Identity threat detection, continuous entitlement analysis, and discovery of new identities, AI agents, and LLMs are essential for reinstating oversight, and effectively lengthening TTT, when risk increases.

Resolution: Applying TTT through identity security controls

Taking Gartner's Time to Trust framework seriously means operationalizing it where trust decisions are enforced. Crucially, this does not begin with autonomous AI agents requesting access. TTT maturity progresses gradually, starting with AI systems that assist in evaluating human-initiated access requests and only later extending those trust principles to non-human identities as governance and confidence mature.

In the early stages of this journey, the most practical and defensible application of TTT is to use AI to analyze, validate, and increasingly automate access requests submitted by humans, while maintaining clear accountability. This stage establishes whether an organization can trust AI to make consistent, explainable authorization decisions.

In TTT terms, reducing reliance on human approval for these decisions, rather than eliminating the human requester, becomes the first measurable signal that trust is being earned.

Delinea's Authorization powered by Iris AI illustrates this pattern in practice. Today, Iris AI applies AI-driven analysis to human access requests, validating user-provided justification against help desk tickets and contextual signals.

You can configure Iris AI to either defer final approval to a human reviewer or automatically grant or deny access for defined scenarios. Notably, Iris AI operates within strict authorization boundaries, evaluating access only as explicitly requested, rather than dynamically expanding privileges.

As confidence grows, you can allow Iris AI to both validate the request and make the authorization decision autonomously for low-risk use cases. Crucially, this trust is reinforced through feedback loops, where administrators can validate or correct authorization outcomes, allowing the system to improve decision quality over time rather than treating trust as static.
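One way to picture such a feedback loop is a trust score that gates autonomy. To be clear, this is not how Iris AI is implemented; it is a minimal sketch under assumed values, showing the general principle that trust should be slow to build, quick to withdraw, and continuously revised by human review outcomes.

```python
class TrustFeedback:
    """Illustrative feedback loop: administrator reviews adjust a trust
    score, and autonomous decisions are allowed only while the score
    stays at or above a floor. All values here are assumptions."""

    def __init__(self, grant_floor: int = 80):
        self.score = 50              # 0-100 trust score, starts neutral
        self.grant_floor = grant_floor

    def admin_review(self, ai_decision_was_correct: bool) -> None:
        # A correction costs more trust than a confirmation earns,
        # so trust builds gradually but is withdrawn quickly.
        if ai_decision_was_correct:
            self.score = min(100, self.score + 5)
        else:
            self.score = max(0, self.score - 20)

    def may_decide_autonomously(self) -> bool:
        return self.score >= self.grant_floor
```

The asymmetry is the point: a single corrected mistake drops the system back below the autonomy floor, which is exactly the "trust degrades as well as improves" behavior the TTT framework expects.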

In TTT terms, the reduction in required human approvals serves as a measurable proxy for increased trust in the authorization system itself, rather than blind faith in automation.

This approach mirrors Gartner's recommendation to make human-in-the-loop controls easy to enable, disable, and measure. It also establishes the foundation required for future TTT maturity.

The same mechanisms used to build trust in AI-assisted authorization for human requests, such as explainability, policy enforcement, auditability, and rollback, will also be required to govern non-human and AI-initiated actions. In that sense, human-centric authorization is not a limitation of TTT maturity, but a prerequisite for extending trust further.

Supporting identity security capabilities matter just as much. Continuous Identity Discovery (CID) helps surface AI agents and service identities that would otherwise bypass governance. Cloud Infrastructure Entitlement Management (CIEM) capabilities, such as Delinea Privilege Control for Cloud Entitlements (PCCE), help ensure agents are not overprivileged; excess privilege artificially shortens TTT while increasing risk.

Identity Threat Detection and Response (ITDR) capabilities from Delinea Identity Threat Protection (ITP) provide behavioral signals that inform whether trust should be increased or decreased.

Three common misconceptions are worth calling out.

  1. First, trust is not equivalent to accuracy. Gartner explicitly notes that trust involves human dynamics, not just correctness. An accurate model with excessive permissions is still untrustworthy.
  2. Second, SaaS platforms do not absolve organizations of the responsibility for making trust decisions. Gartner found that almost no vendors are measuring TTT today. Trust remains the customer's responsibility, especially at the identity layer.
  3. And third, removing HITL is not equivalent to achieving maturity. Gartner's research shows that acceptable HITL thresholds vary by use case, and zero HITL is often undesirable.

A non-obvious insight from Gartner's work is that measuring TTT improves product and control design, not just adoption.

Shorter TTT workflows reveal which guardrails work. Longer TTT workflows highlight where complexity or risk perception needs to be addressed. Identity platforms that expose these signals can turn trust into an engineering discipline rather than a debate.

Looking forward, this suggests how identity platforms must evolve. TTT will require tighter integration between identity governance, authorization, risk signals, and auditability. Trust metrics will increasingly sit alongside ROI and efficiency metrics in executive dashboards, exactly as Gartner recommends.

Summary and call to action

Agentic AI adoption is no longer limited by ambition or funding. It is limited by trust. Gartner's Time to Trust framework provides a method for measuring, defending, and improving trust deliberately.

From an identity security viewpoint, TTT is not abstract. It is evident in how access is granted, how oversight is minimized, and how quickly you can justify your decisions when challenged. Identity controls are where trust becomes enforceable.

If you are responsible for AI governance or identity security, the immediate question is not whether you trust agentic AI. It is whether you can measure that trust today.

To see how this can work in practice, learn more about Delinea's Authorization powered by Iris AI and how it supports a controlled transition from human approval to governed autonomy. As a next step, evaluate whether your current identity security stack can surface, measure, and adjust Time to Trust as agentic AI adoption accelerates.