For the last two years, many companies have treated AI governance as a policy problem. Write a few rules, publish an internal usage guide, ask legal to review a handful of use cases. Then move on.
That approach is running out of road in Europe.
The EU AI Act changes the conversation because it does not ask whether your business is “doing AI” in some broad, abstract sense. It asks a much more practical question: what kind of AI are you building, buying, embedding, or using, and what level of risk does it create?
The law applies to public and private organizations inside and outside the EU if they:
place AI systems or general-purpose AI models on the EU market.
put AI systems into service.
use them in the EU.
For those who lived through it and maybe still have some PTSD, this sounds very similar to the GDPR’s extraterritorial reach. The EU AI Act distinguishes between providers, deployers, and providers of general-purpose AI models.
The compliance burden is spread across the value chain, not concentrated only on model builders.
That matters because many businesses still assume EU AI regulation is mainly a concern for frontier model companies. It is not. If you are using AI in hiring, customer interactions, decision support, fraud detection, finance, content generation, or internal automation, you are already in the zone where governance needs to become much more disciplined. The EU framework is risk-based by design. Some practices are prohibited.
Some systems are classified as high-risk. Some uses trigger transparency obligations. General-purpose AI models have their own obligations. The business impact depends on the use case, the role your organization plays, and the evidence you can produce about how the system is governed.
The timing is also real now. The Act entered into force in August 2024, its prohibitions have applied since February 2025, obligations for general-purpose AI models since August 2025, and most remaining requirements, including those for high-risk systems, phase in through August 2026 and beyond.
From Delinea’s identity security point of view, that is the most important shift to understand. The AI Act is not just a legal framework. It is a control challenge.
That may sound obvious, but many organizations still approach AI compliance as if it lives mainly in documentation: policies, disclosures, model cards, legal reviews, procurement questionnaires. Those things matter. But they are not enough on their own.
As AI becomes more embedded in business processes, and especially as it becomes more agentic, the harder question is no longer “Do we have a policy?” It is “Can we actually see what this AI is doing, what it can access, what identity it is acting under, what policies govern it, what record we will have afterward, and can we tie an AI agent or model back to a human owner?” Delinea’s identity security control plane is built around that exact progression: visibility, posture, and control across human, non-human, and AI identities.
That framing fits the EU moment. The organizations that will struggle most are not necessarily the ones with the most ambitious AI programs. They are the ones with poor visibility.
AI is already showing up in embedded SaaS features, copilots, internal assistants, developer workflows, customer-facing content, and third-party tools that someone enabled six months ago and barely documented. If you cannot inventory those uses, you cannot classify them properly. If you cannot classify them, you cannot decide which ones create high-risk obligations, which ones trigger transparency rules, and which ones create downstream accountability you still own even when the model comes from a vendor.
The AI Act is very clear that deployers matter, not just providers.
Everyday AI is one reason transparency deserves more executive attention than it gets. Under the AI Act, providers and deployers of certain interactive or generative AI systems face obligations to inform people when they are interacting with AI, to make certain generated outputs detectable, and to disclose deepfake-style AI-generated or AI-manipulated content that falls within scope.
For many businesses, that will hit faster and more broadly than the headline-grabbing high-risk category. Customer service bots, AI-generated media, public-facing content, and automated communications are all much closer to everyday business operations than many leaders realize.
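To make that concrete, here is a minimal sketch, in Python, of what a disclosure control can look like at the application layer: a response wrapper that carries both a human-readable notice and a machine-readable marker alongside the output. Every name and field here is an illustrative assumption, not a compliance-approved format, and real marking techniques for generated media go well beyond a metadata flag.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DisclosedResponse:
    """Chatbot output bundled with an AI-use disclosure.

    Illustrative only: the field names and disclosure text are
    assumptions, not a compliance-approved format.
    """
    text: str
    ai_generated: bool = True                    # machine-readable marker
    model_id: str = "assistant-v1"               # hypothetical model identifier
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    disclosure: str = "You are interacting with an AI system."

def disclose(raw_model_output: str) -> DisclosedResponse:
    """Wrap raw output so the notice travels with it as data."""
    return DisclosedResponse(text=raw_model_output)

if __name__ == "__main__":
    reply = disclose("Your order #1042 shipped yesterday.")
    print(reply.disclosure)    # rendered by the UI layer
    print(reply.ai_generated)  # read by downstream integrations
```

The design point is structural: when disclosure travels with the output as data rather than living in a UI template, downstream channels cannot silently drop it.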
High-risk AI is where the compliance burden gets heavier. The EU identifies certain uses in areas such as employment, education, law enforcement, migration, and access to essential services as high-risk, along with some AI systems embedded in regulated products.
Providers of those systems must complete conformity assessments and demonstrate compliance with requirements around risk management, data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness. Deployers have their own obligations too, including using systems according to instructions, monitoring operation, assigning human oversight, and, in some cases, supporting a right to explanation for affected individuals.
That becomes even more important as businesses move from AI assistance to AI action. In the Delinea MCP model, AI agents use existing authentication mechanisms, rely on temporary access tokens, and produce logs with full identity context, so actions remain traceable and auditable. Requests can be validated, policies applied, and responses controlled to reduce unintended disclosure. That is the kind of operating discipline businesses will need more of, not less of, as regulatory scrutiny increases.
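What that discipline looks like is easier to see in a sketch. The Python below is not Delinea’s API; every function, policy entry, and field name is a hypothetical stand-in. It shows the pattern itself: validate the request against policy, mint a short-lived token instead of a standing credential, and record the action with full identity context, including the accountable human owner.

```python
import json
import uuid
from datetime import datetime, timedelta, timezone

# Hypothetical policy table: which agent identities may perform which actions.
POLICY = {
    ("agent:invoice-bot", "read:billing-records"): True,
    ("agent:invoice-bot", "delete:billing-records"): False,
}

def issue_temp_token(agent_id: str, scope: str, ttl_minutes: int = 5) -> dict:
    """Mint a short-lived, single-scope token instead of a standing credential."""
    return {
        "token": uuid.uuid4().hex,
        "agent_id": agent_id,
        "scope": scope,
        "expires_at": (datetime.now(timezone.utc)
                       + timedelta(minutes=ttl_minutes)).isoformat(),
    }

def perform_agent_action(agent_id: str, human_owner: str, scope: str) -> bool:
    """Validate, authorize, and leave an identity-context audit record."""
    allowed = POLICY.get((agent_id, scope), False)
    token = issue_temp_token(agent_id, scope) if allowed else None
    audit_event = {
        "event_id": uuid.uuid4().hex,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,         # which AI identity acted
        "human_owner": human_owner,   # who is accountable for it
        "scope": scope,
        "decision": "allow" if allowed else "deny",
        "token_expires_at": token["expires_at"] if token else None,
    }
    print(json.dumps(audit_event))    # stand-in for shipping to an audit log
    return allowed

if __name__ == "__main__":
    perform_agent_action("agent:invoice-bot", "owner@example.com",
                         "read:billing-records")    # allowed, and logged
    perform_agent_action("agent:invoice-bot", "owner@example.com",
                         "delete:billing-records")  # denied, and also logged
```

Note that the denied request still produces an audit record. The evidence trail a regulator or auditor will ask for includes what the AI tried to do, not only what it did.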
The real issue is not whether AI can be governed in theory. It is whether businesses can govern AI in motion.
This is where the Delinea point of view differs from a generic compliance checklist. For any business running AI in production, the questions that matter are operational:
Can they see all relevant identities, including AI agents and non-human entities?
Can they understand what access exists today, and what could be accessed next?
Can they reduce overprivileged access before it becomes a regulatory and security problem?
Can they prove what happened during an AI-assisted or AI-triggered action?
Internally, Delinea already frames this as board-level readiness: visibility into human, machine, and AI identities; measurable reduction in overprivileged accounts; and sufficient confidence to accelerate AI initiatives without losing control of access risk.
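One of those readiness measures, the reduction in overprivileged accounts, lends itself to a simple check: compare what each AI identity is entitled to do against what it has actually done. The Python below is a toy version of that comparison; a real platform would derive the “used” set from access logs, and all identifiers here are invented.

```python
# Granted entitlements per AI identity (hypothetical inventory data).
GRANTED = {
    "agent:invoice-bot": {"read:billing", "write:billing", "read:hr"},
    "agent:support-copilot": {"read:tickets"},
}

# Scopes actually exercised over the review window, e.g. from access logs.
USED = {
    "agent:invoice-bot": {"read:billing"},
    "agent:support-copilot": {"read:tickets"},
}

def overprivileged(granted: dict, used: dict) -> dict:
    """Return unused entitlements per identity: candidates for removal."""
    return {
        identity: sorted(scopes - used.get(identity, set()))
        for identity, scopes in granted.items()
        if scopes - used.get(identity, set())
    }

if __name__ == "__main__":
    for identity, excess in overprivileged(GRANTED, USED).items():
        print(f"{identity}: unused entitlements {excess}")
    # -> agent:invoice-bot: unused entitlements ['read:hr', 'write:billing']
```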
That readiness framing is also the business lens leaders should apply to the EU AI Act. This is not only about avoiding penalties, even though the penalty thresholds are significant. The Regulation sets fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited practices, up to €15 million or 3% for other non-compliance, and up to €7.5 million or 1% for supplying incorrect or misleading information.
The Commission can also fine providers of general-purpose AI models for non-compliance with their obligations. But the bigger issue for most organizations is operational: whether they can keep adopting AI without creating blind spots they cannot explain later.
So, what should businesses do now?
Start with visibility. Build a real inventory of AI use across the business, including third-party tools and embedded capabilities. Then classify what you find by role, geography, business impact, and likely regulatory category. Then focus on posture: where your riskiest AI identities sit and how to reduce that risk. After that, focus on control: who can access what, what policies apply, what data is touched, what gets logged, and how exceptions are handled.
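Even a first-pass register is enough to start that triage, and its structure is simple to sketch. The Python below shows one possible shape for an inventory entry and a deliberately crude mapping onto the Act’s risk tiers; the field names, categories, and thresholds are illustrative assumptions, not a legal determination, and the sketch ignores prohibited practices entirely.

```python
from dataclasses import dataclass

# Annex-III-style areas that typically signal a high-risk classification.
HIGH_RISK_AREAS = {"employment", "education", "law_enforcement",
                   "migration", "essential_services"}

@dataclass
class AIUseCase:
    name: str
    vendor: str            # "internal" or the third-party supplier
    role: str              # "provider" or "deployer" under the Act
    area: str              # business domain the system operates in
    interacts_with_people: bool
    generates_content: bool

def triage(use_case: AIUseCase) -> str:
    """First-pass risk triage; a lawyer makes the real determination."""
    if use_case.area in HIGH_RISK_AREAS:
        return "high-risk: conformity and oversight obligations likely"
    if use_case.interacts_with_people or use_case.generates_content:
        return "transparency: disclosure obligations likely"
    return "minimal: monitor and document"

if __name__ == "__main__":
    inventory = [
        AIUseCase("CV screening copilot", "VendorX", "deployer",
                  "employment", True, False),
        AIUseCase("Marketing image generator", "internal", "provider",
                  "marketing", False, True),
    ]
    for item in inventory:
        print(f"{item.name}: {triage(item)}")
```

The value of even this toy version is that it forces the questions the Act actually turns on: what role you play, where the system operates, and who it touches.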
The EU AI Act is pushing businesses toward a world where documentation still matters, but proof of controlled operation matters more.
That is the larger lesson here. EU AI regulation is not just raising the compliance bar. It is raising the control bar.
And in an environment where AI can increasingly act, not just answer, that makes identity security central to AI governance. From Delinea’s perspective, that is where the market is heading fastest: toward a model where AI innovation, access governance, and runtime authorization must work together. The businesses that understand this early will be in a much stronger position than the ones still treating AI compliance as a paperwork exercise.