What is Shadow AI and What is the Risk to Your Organization?

Large language models (LLMs) such as ChatGPT and other AI tools have seeped into every facet of business and personal life today, leading many security leaders to suspect that the rate of AI adoption at their organizations has far outstripped what their business leaders have authorized.

This phenomenon, known as shadow AI, creates new risks for organization security that no chief information security officer (CISO) can ignore.

What is Shadow AI?

Shadow AI is the unsanctioned use of AI tools by employees or departments without IT or security team oversight.

Shadow IT issues are not new to organizations. Think unauthorized cloud purchases, unmanaged bring your own device (BYOD) programs, and rogue access points. Shadow AI is a natural consequence of governance lag: users will always innovate with new technology faster than governance and security teams can establish sane policies and effective guardrails.

This is not necessarily a bad thing. The best workers are always motivated to use the best tools they can to be more productive, and the business benefits from that productivity. But without understanding and managing the risks that come with bleeding-edge tech, the downsides can quickly outweigh the productivity gains.

Shadow AI use cases could include developers 'vibe coding' by using LLM prompts to generate code or finance teams using unauthorized AI agents to expedite manual payroll processes. Shadow AI could be the result of unsanctioned use of unsecured AI tooling, or unapproved use of specific AI models or AI training data sets that could put data privacy or compliance at risk.

Delinea recently conducted a study of 300 U.S. CISOs and CIOs to better understand the prevalence and risk of shadow AI. The data shows that shadow AI is not a 'what-if' problem for tomorrow—it's an on-the-ground reality for the vast majority of organizations today. And for many of them, it's already linked to significant data breaches and compliance failures.

The prevalence of Shadow AI

The study shows that 85% of security stakeholders have either confirmed use or have some credible reason to believe that shadow AI applications and models are being used at their organizations. Breaking this down further, some 49% have confirmed cases. The remaining 36% say they think shadow AI use is likely.

89% of respondents say shadow AI is a high priority

The study showed that the most commonly used unauthorized AI tools were LLMs, reported by 85% of organizations, and AI code assistants, reported by 84% of organizations. Respondents reported an average of 6.8 instances of unapproved AI use, with the median standing at 4. The lowest incidence was 2, and the highest was a whopping 76.

Most striking was that among those who know or believe that there is unauthorized AI use in their organization, 64% reported that it has already led to a known data breach or compliance issue within their organization. Shadow AI issues are even more prevalent in medium-sized corporations, with revenue between $10 million and $50 million. An overwhelming 81% of these medium-sized firms with shadow AI use have already suffered breaches and compliance consequences as a result.

64% of stakeholders reported shadow AI issues

What are the risks of Shadow AI?

According to our study, just over a third of respondents reported that they weren't fully prepared for shadow AI-related incidents. So what exactly are those risks?

The risks are numerous: new application security vulnerabilities introduced as unvetted AI apps interact with other assets, data exfiltration, ransomware, copyright infringement, unvetted training data that can skew AI output, and privacy violations.

At its core, the biggest shadow AI risk is data risk. When unknown and unauthorized AI applications and models are used, it's hard to understand what data is being accessed, how the data generated is being used, or when and where data is being exfiltrated—whether for pernicious reasons or just to train an open-source AI model.

There are a lot of different ways for that risk to manifest, and much depends on the use case. If you're a developer using an AI tool, your risk profile and the potential damage will be different from those of a financial analyst using AI to parse spreadsheets, or a marketing pro using AI to generate presentations.

But again, the common denominator for risk is the data accessed by those AI tools. Once an unsanctioned and unmonitored tool accesses that data, you really lose control of where it might end up.

Prepping for AI governance

The good news is that most of the respondents in our study are taking the risk of shadow AI seriously. Some 89% of respondents say that focusing on shadow AI is a high priority for their cybersecurity roadmap.

Veteran IT and security leaders know that simply blanket-banning a technology as valuable as AI is not feasible. These leaders want to mitigate the risk of shadow AI without slowing progress. With shadow AI, the pressure not to simply say 'no' is even greater, because the highest ranks of many businesses today have taken an 'AI first' approach to streamlining operations and business processes. Executives want to reap those gains from AI, and they want to do it quickly.

The task for security leaders is to find ways to enable and empower teams to take advantage of AI's productivity boost without circumventing governance controls or becoming a breach statistic in the process.

To do this, companies need to lay the groundwork with solid governance. This governance should scope out the most valuable use cases of AI, understand what technology is available, identify the risks inherent to these different options, and offer guidance on the safest technology to use. More importantly, security and governance teams should establish guardrails and controls that manage AI access to IT assets and data based on data risk.
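To make this concrete, here is a minimal sketch of what a data-risk-based guardrail could look like in code. Everything in it is a hypothetical illustration: the classification tiers, the tool names, and the check itself are invented for the example, not a reference to any particular product.

```python
# Hypothetical sketch: map data classification tiers to the AI tools
# sanctioned for them, and check requests against that policy.
# All tier and tool names below are invented for illustration.

SANCTIONED_TOOLS = {
    "public":       {"llm-chat", "code-assistant", "slide-generator"},
    "internal":     {"llm-chat-enterprise", "code-assistant"},
    "confidential": {"llm-chat-enterprise"},
    "restricted":   set(),  # no AI tools approved for the most sensitive data
}

def is_request_allowed(tool: str, classification: str) -> bool:
    """Return True if the named AI tool is sanctioned for this data tier."""
    return tool in SANCTIONED_TOOLS.get(classification, set())

print(is_request_allowed("llm-chat", "confidential"))             # False
print(is_request_allowed("llm-chat-enterprise", "confidential"))  # True
```

In practice, a check like this would live inside an access management or data loss prevention layer rather than an ad hoc script, but the principle is the same: the more sensitive the data, the shorter the list of AI tools allowed to touch it.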

Keeping up with Shadow AI

The big challenge in detecting and enforcing shadow AI governance policies is that the control mechanisms are still catching up. In the meantime, prioritize risk and apply relevant existing governance standards. For example, if the biggest AI risks are data risks, then focus on access management, especially for privileged data.

Having strong controls around your data and access visibility can help organizations detect anomalous use or different patterns that could indicate shadow AI use. This can help security professionals and policymakers have discussions with users to encourage them to use sanctioned tools or potentially adjust policies depending on the business needs.
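As a simple illustration of what that detection could look like, the sketch below scans a web proxy log for outbound requests to known AI service domains. The log file name, the column layout, and the domain list are all assumptions made for the example, not a prescribed method.

```python
# Hypothetical sketch: flag proxy log entries that hit known AI service
# domains. Assumes a CSV log with header columns: timestamp, user, domain.

import csv

AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}  # sample entries

def flag_ai_traffic(log_path: str) -> list[tuple[str, str]]:
    """Return (user, domain) pairs for requests to AI service domains."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                hits.append((row["user"], row["domain"]))
    return hits

for user, domain in flag_ai_traffic("proxy_log.csv"):
    print(f"Possible shadow AI use: {user} -> {domain}")
```

A report like this isn't proof of wrongdoing; it's a conversation starter for steering users toward sanctioned tools or adjusting policy where the business need is real.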

Shadow AI is here and poses a real risk for many organizations. But with the right tools and governance in place, security leaders can protect their firms from this emerging threat.

Find out how Delinea can help you discover, provision, and control all machine and AI identities.