Securing generative AI: A strategic framework for security leaders

Pierre Mouallem
Adopting generative AI (GenAI) introduces significant opportunities, along with real risks. Security leaders must move quickly but carefully to make the most of GenAI without compromising trust, privacy, or compliance. A smart, identity-first strategy built on governance, technology controls, and adaptive security measures is key.
Securing GenAI isn’t a one-time fix—it’s a steady, evolving process. The right framework helps leaders stay focused, prioritize the right actions, and mature their security posture over time.
1. Build a clear and accountable AI governance structure
The first step in securing GenAI is strong governance. This means aligning AI use with company values, regulatory requirements, and ethical standards. Create a cross-functional governance group to guide projects, review tool usage, and track compliance with regional and global standards.
Set clear expectations for ethical AI use, and evaluate the broader impact of your AI systems—on your organization, your customers, and society. These insights can shape ongoing training to help teams use AI responsibly and stay ahead of emerging risks. Be ready to revise your governance model regularly to keep up with evolving technology and regulation.
2. Apply forward-looking technology controls
Security leaders need smart technical safeguards in place to defend GenAI systems. This includes robust logging to track how users interact with AI, and strong access controls to protect models and data.
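As one illustration of what "robust logging" can mean in practice, the minimal sketch below records who queried which model and when, storing a hash of the prompt rather than the prompt itself so the audit trail does not become a second copy of sensitive data. The function and field names are illustrative assumptions, not a reference to any specific product.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("genai.audit")

def log_interaction(user_id: str, model: str, prompt: str) -> dict:
    """Record who queried which model, and when.

    Only a SHA-256 hash of the prompt is kept, preserving an integrity
    trail without storing potentially sensitive prompt text in the log.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))
    return record
```

In a real deployment these records would ship to a tamper-evident log store rather than standard output.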
Security reviews, cryptographic integrity checks, and regular audits help detect tampering and maintain trust in model outputs. It’s also critical to vet all third-party components—from libraries to external models—before integrating them.
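A cryptographic integrity check can be as simple as comparing model artifacts against a known-good hash manifest recorded at deployment time. The sketch below assumes a local directory of artifact files and a manifest you maintain yourself; it is one possible approach, not a prescribed tool.

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash a file in chunks so large model weights don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifacts(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the names of artifacts whose current hash differs from the
    known-good value in the manifest (i.e., possible tampering)."""
    return [
        name for name, expected in manifest.items()
        if sha256_file(root / name) != expected
    ]
```

Running this check on a schedule, and before every model load, turns silent tampering into a detectable event.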
To reduce the risks from unapproved tools, known as “shadow AI,” allow only sanctioned platforms and implement discovery processes to spot unauthorized use in real time. (Related reading: Shadow AI and the innovation paradox: Securing the future without slowing progress).
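One lightweight discovery technique is to scan egress or proxy logs for traffic to known AI services that are not on the sanctioned list. The domain names below are placeholders; in practice the list of AI endpoints would come from threat-intel or CASB feeds.

```python
# Hypothetical endpoint lists for illustration only.
SANCTIONED_AI = {"api.sanctioned-ai.example"}
KNOWN_AI_DOMAINS = {
    "api.sanctioned-ai.example",
    "chat.freeai.example",
    "llm.unvetted.example",
}

def find_shadow_ai(proxy_log: list[dict]) -> list[dict]:
    """Flag proxy entries that reach a known AI service
    outside the sanctioned allowlist."""
    return [
        entry for entry in proxy_log
        if entry["host"] in KNOWN_AI_DOMAINS
        and entry["host"] not in SANCTIONED_AI
    ]
```

Flagged entries can feed an alerting pipeline or a periodic report so unauthorized use surfaces quickly rather than months later.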
3. Tighten data access and control
AI systems are only as secure as the data they use. Apply identity-based security models that include just-in-time provisioning, MFA, and least-privilege principles.
Before using training data, remove sensitive content, classify and tag data appropriately, and protect it with encryption across its lifecycle. Real-time monitoring of AI inputs helps ensure outputs meet privacy, fairness, and ethical standards.
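The redact-classify-tag step can be sketched with simple pattern matching; real pipelines use dedicated DLP or PII-detection services, so treat the patterns below as a minimal stand-in for illustration.

```python
import re

# Illustrative patterns only; production systems use far richer detectors.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_and_tag(text: str) -> tuple[str, set[str]]:
    """Replace sensitive tokens with placeholders and return the
    classification tags found, so downstream handling can differ
    by sensitivity."""
    tags = set()
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            tags.add(label)
            text = pattern.sub(f"[{label.upper()}]", text)
    return text, tags
```

The returned tags can drive policy, for example routing anything tagged `ssn` to an encrypted store and excluding it from training sets entirely.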
You’ll also need controls on what GenAI produces. Apply filtering, logging, encryption, and tagging to AI-generated content to ensure traceability and accountability.
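Output-side filtering and tagging might look like the sketch below: generated text is screened against a blocklist and wrapped with provenance metadata (model name, timestamp, content hash) for traceability. The blocklist term and structure are assumptions for the example.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical term that must never appear in released output.
BLOCKLIST = {"project_x_codename"}

def filter_and_tag_output(model: str, output: str) -> dict:
    """Screen generated text and attach provenance metadata so every
    released output is traceable to a model and point in time."""
    blocked = [t for t in BLOCKLIST if t in output.lower()]
    return {
        "allowed": not blocked,
        "content": "" if blocked else output,
        "provenance": {
            "model": model,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(output.encode()).hexdigest(),
        },
    }
```

Storing the provenance record alongside the audit log makes it possible to answer "which model produced this, and when?" long after the fact.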
4. Equip yourself with identity-first tools and capabilities
Securing GenAI starts with identity. A strong identity security stack helps manage the unique challenges of AI environments—especially the volume of non-human identities accessing sensitive data and systems.
Start with Privileged Access Management (PAM) to restrict and monitor high-level access to AI tools and infrastructure. Add a zero-trust approach built on intelligent authorization, just-in-time privilege elevation, and compliance tracking.
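The core idea behind just-in-time privilege elevation is that access grants expire instead of accumulating. The toy class below illustrates that pattern; a real PAM platform adds approval workflows, session recording, and credential vaulting on top.

```python
from datetime import datetime, timedelta, timezone

class JITGrants:
    """Time-boxed access grants: privileges expire automatically
    rather than persisting as standing access."""

    def __init__(self) -> None:
        # (identity, resource) -> expiry time
        self._grants: dict[tuple[str, str], datetime] = {}

    def elevate(self, identity: str, resource: str, minutes: int = 15) -> None:
        """Grant elevated access for a short, fixed window."""
        expiry = datetime.now(timezone.utc) + timedelta(minutes=minutes)
        self._grants[(identity, resource)] = expiry

    def is_allowed(self, identity: str, resource: str) -> bool:
        """Allow only if a grant exists and has not yet expired."""
        expiry = self._grants.get((identity, resource))
        return expiry is not None and datetime.now(timezone.utc) < expiry
```

Because nothing is granted by default, the same mechanism enforces least privilege for human and non-human identities alike.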
Your policies should be flexible. Use adaptive policy management to adjust in real time based on AI-generated insights and operational needs. And don’t overlook model integrity—implement checks to protect against manipulation or unauthorized updates.
Watch for common pitfalls
Security leaders aiming to protect GenAI must stay alert to common missteps. These include overlooking shadow AI, placing too much trust in AI outputs, leaving gaps in access controls, failing to evolve with new threats, and ignoring third-party risks.
With the right strategy, you can adopt GenAI confidently—balancing innovation with strong, adaptable security. Governance, smart controls, and identity-first protections give you the foundation to move forward without sacrificing trust or compliance.
Download our whitepaper, Securing the Use of Generative AI Technologies, and take the first step towards securing your GenAI initiatives with an identity-first approach.
