It’s guardrails, not gates, that balance AI innovation and security
When AI can dynamically compose actions, traditional authorization models fall apart, putting your organization at risk of failing to meet security and compliance requirements.
AI agents present unique challenges because they can autonomously chain multiple functions together in ways that produce unexpected outcomes.
Companies are using AI models in new ways, and those models are accessing live production data. Some are skipping over important questions of identity security, risking data breaches and non-compliance.
Let’s explore some examples that highlight the risks and strategies to help you balance AI innovation with security requirements.
Three AI security risks, and an opportunity
Over the past few weeks, I’ve heard several stories about AI that I can’t stop thinking about.
Three of these stories highlight the risk of uncontrolled AI, and one demonstrates how carefully governed AI reduces risk.
- I spoke with a large university that runs a chatbot on its website. The chatbot was pulling sensitive, private data about students and applicants to answer their questions. Great for personalization and training data, right? But it can also violate confidentiality, data privacy, and compliance requirements.
- A team of data scientists pushed an AI model to GitHub. They didn't realize that a full machine backup had been bundled with the model, and it contained things like unhashed session tokens. Unhashed session tokens pose a severe security risk: an attacker who obtains one can take over an active session, impersonate the legitimate user, and bypass authentication entirely.
- The CISO of a financial services company told me they looked across their dev and test environments and found 100 instances of orphaned AI agents for an LLM pulling live data for an application. The dev team had moved on to a different project, but those agents were still running within the environment. Without human oversight, an agent could exhibit unexpected behavior, and no one would know.
...but companies can leverage the potential of AI to improve productivity, while also reducing risk
Then I heard a happy story. One that demonstrates how companies can leverage the potential of AI to improve productivity, while also reducing risk.
A healthcare provider wanted to understand application performance needs so they could deploy the right-sized infrastructure for different use cases. An enterprising engineer created an AI bot that would be smart enough to check the performance of various parts of the application, understand the cost factors, and adjust automatically.
Brilliant.
But there was an issue: the changes required privileged access, so each one needed manual approval, and that step slowed everything down.
To become more efficient, they used Delinea’s Iris AI to automate those decisions by checking against established authorization policies. The result was low risk and high reward. They could save costs and improve productivity, while still meeting security and compliance requirements.
In the video below, I explain how Iris AI moves beyond authentication to analyze intent, need, risk, and asset sensitivity within sessions, enabling real-time, policy-driven authorization decisions.
IT can’t be the “Department of No”
The AI horse is already out of the barn, so to speak. We can’t lock the gates, but what we can do is put up guardrails that help teams use AI securely.
How do we do this?
Start with visibility
- Understand what AI is lurking within the environment.
- Discover what data sources the AI is pulling from and what infrastructure it’s connecting to.
- Right-size the identities that have access to the AI.
- Confirm they’re reduced to least privilege so the blast radius of an attack can be contained.
Then, once you know what you’re dealing with, govern the access.
A lot of people talk about authentication: verifying that a person or machine (which is also an identity) is who it claims to be before granting access to an asset. But access alone is only part of the story. You must also manage authorization: what a user can do once they obtain that access. And, for comprehensive governance, you must confirm that authorization controls are working as expected.
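The distinction can be sketched as two separate checks that must both pass. This is a minimal, hypothetical Python illustration; the identity names, permission strings, and policy table are invented for the example and are not a real product API.

```python
# Hypothetical sketch: authentication and authorization as distinct checks.

# Identities that have already proven who they are (authentication).
AUTHENTICATED_IDENTITIES = {"deploy-bot", "alice"}

# Authorization policy: what each identity may do once authenticated.
PERMISSIONS = {
    "deploy-bot": {"read:metrics", "scale:app-tier"},
    "alice": {"read:metrics"},
}

def is_authenticated(identity: str) -> bool:
    """Authentication: is this identity who it claims to be?"""
    return identity in AUTHENTICATED_IDENTITIES

def is_authorized(identity: str, action: str) -> bool:
    """Authorization: may this identity perform this action?"""
    return action in PERMISSIONS.get(identity, set())

def allow(identity: str, action: str) -> bool:
    # Both checks must pass; being authenticated alone grants nothing.
    return is_authenticated(identity) and is_authorized(identity, action)
```

In this sketch, `alice` is authenticated but still cannot scale infrastructure, which is the point: authorization limits what an authenticated identity, human or machine, can actually do.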
That’s what Delinea Iris does.
Delinea Iris AI tells you what’s happening throughout a privileged session and flags issues for an administrator. For instance, it looks within the session itself at the actions taken by individuals and by non-human identities, such as machines and scripts that issue commands, so you can determine whether those activities are malicious.
It can even enable you to respond to malicious activities in an automated way, such as destroying the connection or maybe stopping access altogether.
Four questions determine if AI authorization can proceed
Before you authorize AI to autonomously or dynamically make decisions, be sure you ask these essential questions:
1. What's the intention of the AI?
So goes the old joke: If you ask three engineers how to fix a problem, you'll get five answers. That’s because there are multiple ways to do everything. You need to understand the purpose of the initiative to determine the best access option.
2. Is access necessary?
Does this person or machine identity need to have that access? Maybe they’re already qualified to have that access. Or maybe they’re not.
3. How risky is the activity?
Has this identity exhibited risky activity or behavior before? If so, you can use that history to make a smarter authorization decision.
4. How critical is the asset they want to access?
Is it benign, such as deploying a patch, or is it something like turning off the firewall and deleting log files to cover your tracks? If you’re touching VPC logs or internally approved firewall traffic, it’s probably not a huge deal. If you’re touching customer data, API keys, or private keys, however, that’s probably a bigger deal.
If you can combine those four elements, you can make really smart decisions, automatically.
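The four questions above can be combined into a single decision, roughly like this hypothetical Python sketch. The field names, scoring scales, and threshold are illustrative assumptions, not Delinea Iris AI's actual policy model.

```python
# Hypothetical sketch: combining intent, need, risk history, and asset
# sensitivity into one automated authorization decision.
from dataclasses import dataclass

@dataclass
class Request:
    intent_matches_purpose: bool  # 1. Does the action fit the stated purpose?
    access_is_necessary: bool     # 2. Does this identity need the access?
    prior_risk_score: float       # 3. 0.0 (clean history) to 1.0 (very risky)
    asset_sensitivity: float      # 4. 0.0 (benign) to 1.0 (critical)

def authorize(req: Request, risk_budget: float = 1.0) -> bool:
    # Intent and necessity are hard yes/no gates: fail either, and stop.
    if not (req.intent_matches_purpose and req.access_is_necessary):
        return False
    # Then weigh behavioral history against how critical the asset is.
    return req.prior_risk_score + req.asset_sensitivity <= risk_budget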
Do you have a story?
Every day, there are stories about people reaping the productivity benefits of AI. AI is a force multiplier. People are hungry to remove tedious, repetitive work and create new ways of moving their business forward.
AI security issues aren’t caused by a lack of expertise, or even of resources. Things are moving so fast, and data is exploding, that you just can’t do it all. Delinea Iris AI can help you balance innovation, productivity, and security.
We’d love to hear what you’re trying to do with AI and see how we can support you.
