Essential AI Questions for Vendor Security Assessments

As CISO of Delinea, I'm responsible for making sure any technology we purchase aligns with our cybersecurity standards. Secure use of AI is top of mind when we conduct vendor risk assessments, just as it is for customers who evaluate Delinea.

At Delinea, we make sure we have very clear explanations of the guardrails we’ve implemented for the use of AI in our products.

We expect the same from our vendors.

As an industry, we’re all trying to figure out the best way to confirm AI meets ethical considerations, security best practices, and regulatory requirements. With those goals in mind, we’ve been adding AI governance steps to our procurement process and sharing what we’ve learned with other security leaders.

I hope that hearing about our journey is helpful to you as you develop your own AI security assessments.


When to conduct AI security assessments

Our governance team divides AI use into two buckets:

  • AI capabilities that are offered in Delinea’s identity security solutions
  • AI features and capabilities available for internal use

We aren’t an AI company; we’re a cybersecurity company. We’re not developing AI models ourselves. Instead, we’re leveraging well-established models from leading AI companies to improve and augment the way Delinea products function. Being transparent about how AI models and capabilities are used within our products is paramount for instilling trust with our customers.

For software we license for internal use, many vendors are adding AI capabilities to existing products. Therefore, in addition to asking about AI in the initial sales process, it’s also essential when renewing or expanding contracts.

Key questions we ask any AI vendor

Whether AI is used in our products or by one of our business functions, we’ve set up a security assessment process every Delinea vendor must follow.

The assessment rigor is based on the use case. If an AI tool or capability doesn't access any sensitive data and doesn't integrate with any other business applications, then the review process is pretty simple. Otherwise, it’s comprehensive and in-depth to ensure that the business fully understands the risks associated with using such a capability and can make an informed decision.
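
To make that tiering repeatable, the triage logic can be captured in a short script. The sketch below is illustrative only, not Delinea's actual process; the intake fields (handles_sensitive_data, integrates_with_business_apps) are hypothetical values a reviewer might record before the assessment starts.

    from dataclasses import dataclass

    @dataclass
    class AIToolIntake:
        """Hypothetical intake record captured before a vendor review."""
        name: str
        handles_sensitive_data: bool         # touches customer, personal, or regulated data
        integrates_with_business_apps: bool  # connects to other internal systems

    def assessment_tier(intake: AIToolIntake) -> str:
        """Map an intake record to a review depth, mirroring the two-tier rule described above."""
        if not intake.handles_sensitive_data and not intake.integrates_with_business_apps:
            return "lightweight review"       # simple use case: a quick check is enough
        return "comprehensive assessment"     # sensitive data or integrations: full review

    print(assessment_tier(AIToolIntake("meeting summarizer", False, False)))   # lightweight review
    print(assessment_tier(AIToolIntake("AI copilot in the CRM", True, True)))  # comprehensive assessment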

How well software vendors are prepared for these types of questions varies with the company's size, age, and maturity. Some have responsible use statements we can refer to for guidance. In other cases, it takes a conversation.

Here are the critical security concerns we review with our vendors:

  1. Use of AI features must be opt-in: We don't want to use AI capabilities unless we explicitly opt in. It can't be a situation where a banner pops up for a user and, all of a sudden, AI is enabled. Sometimes vendors turn on AI in a somewhat stealthy way. That's a big concern, because customers may end up with capabilities without understanding what they are or what they do.

  2. AI needs to be well documented: I don't need to understand the inner workings of a vendor's AI model. I do, however, need to understand what we're buying and how it will be used. What I expect is high-level documentation on AI capabilities and security controls.

  3. Calculate immediate and long-term costs: Remember the early days of ride-sharing companies when rides were extremely cheap because they just wanted to get people on the platform? They weren't making money. They were losing money. It will be the same thing with AI. Right now, it's early days, but AI is expensive, and it's going to get monetized. Even if it's initially offered for free, that won't continue to be the case.

    We need to understand the cost structure, and if it's not well-defined in the long term, we must at least understand how long the current structure will be maintained. Consider the indirect or downstream costs of using AI. For example, with AI, you're generating a lot more data, a lot more transactions, and that might impact other costs (e.g., ingestion costs). I would expect the true cost of AI not to materialize for another year or two. As a buyer, getting ahead of that is crucial. You don't want to start using something, become dependent on it, only to realize that it's exorbitantly costly for long-term usage. A rough way to put numbers on that exposure is sketched after this list.

  4. Confirm what data is used to train the AI model: We want to make sure there is no model sharing. A vendor may do this to cut costs or improve the output, ultimately to provide more value to customers. I get it—you want to train on the data. There's no way around it. But how you go about doing that is really the heart of the matter. A vendor could say one thing, and behind the scenes, be doing all kinds of ancillary stuff that could get you in trouble.

    There must be guardrails in place for me to be comfortable letting a vendor train on my data. I want to ensure that the model instance is dedicated to me, that the training being used is specific to me, and that it will not be used elsewhere. I want to make sure that the output is properly monitored and contained. And I need to understand the value added.

  5. How the vendor treats AI in their own organization: Understanding the governance framework for AI use within a vendor company will tell you a lot about how they're going to use AI within the products you're buying. If there's very little control around privacy, data access, etc., that's a red flag.

    For example, you don't want the contract that you have with that vendor to be consumed by an internal AI tool and suddenly be available on the internet. On the flip side, if a vendor says, “Well, no, we don't allow AI internally,” that's another red flag. They can't expect their customers to use it when they are afraid to use it themselves.
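
On the cost point (item 3), a back-of-the-envelope projection can make the downstream exposure concrete. All of the figures below are invented placeholders, not real vendor pricing; the point is simply to multiply expected usage growth against per-unit pricing and knock-on costs such as data ingestion.

    # Hypothetical multi-year cost projection for an AI feature.
    # Every number here is an illustrative assumption, not real pricing.
    monthly_queries = 50_000           # assumed year-1 usage
    usage_growth = 1.8                 # assumed annual growth once teams depend on the feature
    price_per_1k_queries_now = 0.0     # "free" at launch...
    price_per_1k_queries_later = 2.50  # ...assumed pricing once the feature is monetized
    gb_generated_per_1k_queries = 0.2  # extra data the feature produces
    ingestion_cost_per_gb = 0.10       # downstream cost of storing/ingesting that data

    for year in (1, 2, 3):
        queries = monthly_queries * (usage_growth ** (year - 1)) * 12
        unit_price = price_per_1k_queries_now if year == 1 else price_per_1k_queries_later
        ai_cost = queries / 1000 * unit_price
        ingestion_cost = queries / 1000 * gb_generated_per_1k_queries * ingestion_cost_per_gb
        print(f"Year {year}: AI usage ~${ai_cost:,.0f}, ingestion ~${ingestion_cost:,.0f}")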

Sample questions for your AI vendor questionnaire

Below are some AI-related security and privacy questions to include in the procurement process.

  1. Does the AI/ML algorithm process any personal information?
  2. Are you considered a "data controller" for any of the personal data that is used for input or generated for output? Please explain.
  3. Please list all elements of personal information that are processed for each AI/ML algorithm. Please provide the AI documentation and information you normally share with your customers.
  4. Does your service use or integrate with generative AI provider(s)?
  5. Please provide the name, version, and type (e.g., neural network, decision tree) for each generative AI model.
  6. Do you transmit or ingest data to an external Large Language Model (LLM)?
  7. Is the external LLM integrated via an API, or is it accessed as a public service?
  8. Will you use any of our data (e.g., input, prompts, output) to train any AI model, including fine-tuning?
  9. Do we have a choice on whether or not our data is used for training purposes?
  10. Are there any recommendations for further testing, validating, or monitoring the AI tool's output?
  11. Does the AI model include capabilities to track and report its performance against predefined metrics?
  12. What is your retention period for data entered into the AI tool (i.e., inputs, prompts)? And for data generated by the AI tool (i.e., output)?
  13. Do you retain any usage rights related to input/prompts or output?
  14. Do you share any input or output data with third parties?
  15. How are users made aware that they are interacting with or using the AI tool (e.g., chatbot)?
  16. Has the AI model been audited by an independent third party?
  17. Who will own the IP in any content, materials, or other outputs generated by the AI tool?
  18. Is there a mechanism for flagging and reporting issues related to bias, discrimination, inaccuracies, or poor performance of the AI tool?
  19. Does the AI tool permit continuous monitoring and logging of entered prompts to create a record?
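
If you fold these questions into a procurement or GRC workflow, it can help to keep them in a structured, machine-readable form so responses can be tracked and compared across vendors. A minimal sketch, using a hypothetical schema and only a few of the questions above:

    # Hypothetical structure for tracking AI questionnaire responses per vendor.
    questionnaire = [
        {"id": 1, "question": "Does the AI/ML algorithm process any personal information?"},
        {"id": 8, "question": "Will you use any of our data (e.g., input, prompts, output) "
                              "to train any AI model, including fine-tuning?"},
        {"id": 16, "question": "Has the AI model been audited by an independent third party?"},
    ]

    def record_response(vendor: str, question_id: int, answer: str, notes: str = "") -> dict:
        """Attach a vendor's answer to a question so reviews can be compared later."""
        question = next(q for q in questionnaire if q["id"] == question_id)
        return {"vendor": vendor, "question": question["question"],
                "answer": answer, "notes": notes}

    print(record_response("ExampleVendor", 8, "No", "Opt-out confirmed in contract"))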

Navigating the complexities of AI in vendor security assessments requires diligence, transparency, and collaboration. At Delinea, we’re committed to setting a high standard for AI governance, ensuring that our practices not only meet but also exceed industry expectations. By sharing our insights and experiences, we aim to empower other security leaders to make informed decisions that align with ethical considerations, security best practices, and regulatory requirements.

As AI continues to evolve, staying proactive and informed is key to leveraging its potential while reducing risk.