Let's talk about the rise of AI agents in business applications.
AI agents are transforming how users within organizations interact with applications. Business applications, like Enterprise Resource Planning (ERP) systems, are the next frontier.
We’ve seen AI agents integrated into productivity and developer tools like Microsoft Word and Visual Studio, and now vendors like Microsoft, Oracle, and SAP are embedding AI agents into their business applications, including Microsoft Dynamics 365, Oracle Fusion Cloud, NetSuite, and SAP S/4HANA.
The goal is to offload mundane or repetitive tasks to AI agents, so that end users, the people who understand the inner workings of the business, have a ‘team’ of agents at their disposal to automate tasks, deliver insights, streamline operations, and make autonomous decisions on their behalf.
Business applications manage critical financial and operational data, so the security stakes are high. This blog will help you understand how AI agents work in business applications with real-world examples, present the security challenges and risks these agents introduce, and provide a framework for securing and governing AI agents in your systems.
Artificial Intelligence (AI) agents are non-human entities that perform business tasks, like ‘digital employees’. Unlike static, rule-based automation, an AI agent can reason, plan, and take action based on the information it’s given.
There are three parts to an AI agent: the Large Language Model (LLM) that handles reasoning, the memory that stores past interactions, and the tools that allow the agent to interact with the outside world to retrieve data, take action, or call other agents.
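To make that anatomy concrete, here is a minimal, illustrative sketch in Python. The LLM is stubbed out and the tool and memory are toy placeholders, not any vendor’s actual implementation; it only shows how the three parts fit together.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Toy agent: an LLM for reasoning, memory for context, tools for action."""
    llm: Callable[[str], str]                        # 1. reasoning engine
    memory: list[str] = field(default_factory=list)  # 2. past interactions
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)  # 3. outside world

    def run(self, task: str) -> str:
        # Reason: build a prompt from the task plus remembered context.
        prompt = "\n".join(self.memory + [task])
        decision = self.llm(prompt)  # e.g. "lookup_po:4501"
        # Act: if the model chose a known tool, invoke it with its argument.
        tool_name, _, arg = decision.partition(":")
        result = self.tools[tool_name](arg) if tool_name in self.tools else decision
        # Remember: store the exchange so future reasoning has context.
        self.memory.append(f"task={task} -> {result}")
        return result

# A stubbed LLM and a single invented tool, just to show the moving parts.
agent = Agent(
    llm=lambda prompt: "lookup_po:4501",
    tools={"lookup_po": lambda po: f"PO {po}: 100 units, due Friday"},
)
print(agent.run("When is purchase order 4501 due?"))
```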
The Model Context Protocol (MCP) is an open-source standard, created by Anthropic and backed by OpenAI, Google, and others, that connects LLMs to external systems (files, APIs, infrastructure), giving enterprises a consistent bridge between AI agents and their data sources. Think of it as a USB-C port for AI agents and the systems they access. Where you once wrote hard-coded integration logic against individual APIs, MCP provides a common structure that makes the relationships and constraints between agents and systems explicit.
You could also think of MCP as the ‘traffic cop’ that governs how agents behave and which systems they can access.
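As a hedged illustration of what exposing a business-system tool over MCP can look like, here is a minimal server sketch using the official Python SDK’s FastMCP helper (`pip install mcp`). The tool name, its argument, and the data it returns are invented for this example; a real server would query your ERP with properly scoped credentials.

```python
# Minimal MCP server sketch using the official Python SDK (`pip install mcp`).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("erp-demo")

@mcp.tool()
def get_purchase_order(po_number: str) -> str:
    """Return the status of a purchase order (invented placeholder data)."""
    # A real server would query the ERP here, with least-privilege credentials.
    return f"PO {po_number}: confirmed, expected delivery 2025-06-01"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to an MCP-capable client
```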
Let’s take a look at a few examples of the native AI agents in Microsoft Dynamics 365 applications.
For a procurement specialist trying to ensure that critical supply arrives on time for production, the Supplier Communications Agent in D365 Supply Chain Management uses AI to automate manual, repetitive vendor communications. This frees up purchasers to focus on more complex, higher-value tasks.
For an accounting manager responsible for making sure the company’s financial results are accurate when the company closes out a quarter or year, the Account Reconciliation Agent in D365 Finance transforms the financial close from a reactive, report-driven process into a proactive, intelligent workflow. It automatically identifies reconciliation exceptions and recommends corrective actions, helping accounting managers close faster with greater accuracy.
For a consultant submitting expenses while working on client projects, the Expense Agent for D365 Project Operations automates the end-to-end expense management process. Once the consultant forwards receipts to a shared mailbox or uploads them, the agent automatically extracts key details, groups them, and organizes them by project or trip, saving time and eliminating manual entry. This helps consultants efficiently and consistently submit and manage their expenses, enabling them to focus their attention on delivering results for their clients.
For a dispatcher who manages technicians’ schedules, such as at an HVAC company that dispatches air-conditioning repair technicians, the Scheduling Operations Agent in D365 Field Service uses AI-driven optimization to quickly adjust and improve individual technicians’ schedules. This reduces manual effort, minimizes travel time, and ensures higher-priority work is completed on time.
AI agents in business applications like D365 deliver clear speed and efficiency gains, automating repetitive workflows and reducing manual effort for end users. However, they also introduce risks such as data exposure, decision errors, and misconfiguration.
We are experiencing an explosion of AI agents: software vendors embed their own, third parties sell add-on agents for business applications, and tools like Microsoft Copilot Studio let you design your own. Trusted vendors take security seriously, but when their agents talk to less-scrutinized ones, the agentic AI supply chain is only as secure as its weakest link.
To manage these risks, organizations must establish a process to regularly review which agents are available in their business application environment and understand how they operate. AI is powerful, but its use must be balanced with careful evaluation and oversight.
Consider the security implications of the Supplier Communications Agent for D365 Supply Chain Management.
The Supplier Communications Agent monitors incoming supplier emails and responds on your behalf, following up on purchase orders and delivery confirmations.
This means you need to grant the agent access to your emails. It’s important to evaluate whether you want the agent to read all your emails, including those that don’t pertain to your supply chain.
Think about what level of access the agent should have in this process. Do you trust the agent to update Purchase Orders (POs)? Are you confident that it will update the right amount on the PO?
Keeping a human in this process, a pattern known as ‘Human in the Loop’ (HITL), is important from a security perspective.
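A common way to implement HITL is an approval gate: the agent proposes a change, but a person confirms it before anything is written back to the ERP. The sketch below is hypothetical; the function name, the threshold, and the console prompt are all invented for illustration.

```python
# Hypothetical human-in-the-loop gate: the agent proposes a PO change, but a
# person must approve anything above a risk threshold before it is committed.
APPROVAL_THRESHOLD = 10_000  # example value; tune to your risk appetite

def apply_po_update(po_number: str, new_amount: float, proposed_by: str) -> bool:
    if new_amount > APPROVAL_THRESHOLD:
        answer = input(
            f"[HITL] {proposed_by} wants to set PO {po_number} "
            f"to ${new_amount:,.2f}. Approve? (y/n) "
        )
        if answer.strip().lower() != "y":
            print("Change rejected; nothing written to the ERP.")
            return False
    # A real system would call the ERP's API here with scoped credentials.
    print(f"PO {po_number} updated to ${new_amount:,.2f}.")
    return True

apply_po_update("4501", 25_000.00, proposed_by="supplier-comms-agent")
```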
AI agents act like human users, but with different risk profiles. Similar to service accounts, AI agents typically run without human oversight and introduce unique security and governance challenges if not tightly controlled.
Elevated access privileges. AI agents often receive elevated access privileges by default to perform their tasks, with limited guardrails, because of the fear that such guardrails would inhibit their ability to complete tasks. This lack of scrutiny can increase exposure to sensitive data or system functions.
Lack of human accountability. Unlike human users, AI agents can’t be held accountable for their actions or explain why a decision was made. There needs to be a human in the AI workflow to review access levels and monitor activity.
Potential for misuse or misconfiguration. Granting AI agents excessive access to ERP, CRM, or HCM systems can expose sensitive data: a bad actor who manipulates an overprivileged agent can exfiltrate data they wouldn’t otherwise know about or be able to reach.
The rise of AI agents takes away ‘security by obscurity’: the idea that if attackers don’t know how a system works or even if it exists, they will have a harder time breaching it. Now that AI agents have access to systems, if you don’t lock down access effectively, someone can craft a prompt, and the AI agent will serve up the data to them.
For example, McDonald’s implemented an AI chatbot to screen applicants during the hiring process. Security researchers found web-based vulnerabilities in the chatbot, including a weak password, that allowed them to query the company’s databases to access personal data about other applicants.
When it comes to AI agents, there is a governance gap: AI capabilities are advancing faster than compliance frameworks. Most organizations lack formal oversight or governance processes for AI in general, and specific processes and policies for securing AI agents are lacking, too.
The main principle to remember is to treat AI agents as users within your business application security model. Access should align with least privilege and zero trust principles, with human oversight and periodic reviews of privileges and actions—just as you would for human users.
1. Inventory: Maintain an accurate inventory of all users and AI agents, including the access they have in business applications, with clear attribution of actions to specific agents for visibility and accountability (a sketch of what such a record might look like follows this list).
2. Access control: Apply least privilege and zero trust principles. Avoid overprovisioning to protect sensitive information and business processes.
3. Monitoring and audit trails: Track all agent actions, including data accessed or modified, to support compliance and audit readiness. Include context behind why changes were made.
4. Lifecycle management: Assign unique identities to AI agents. Avoid shared service accounts to prevent hidden risks.
5. Human oversight: Keep humans ‘in the loop’ for accountability, ethical review, and secure operations of AI agents.
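To ground points 1 through 4, here is a hypothetical sketch of what an agent inventory record with least-privilege scopes and an audit trail might look like. Every name and the permission model are invented for illustration; a real implementation would live in your identity and governance tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    """Inventory entry: one unique identity per agent, never a shared account."""
    agent_id: str                # unique identity (point 4)
    owner: str                   # accountable human (point 5)
    granted_scopes: set[str]     # least-privilege grants (point 2)
    audit_log: list[str] = field(default_factory=list)  # actions taken (point 3)

    def act(self, scope: str, detail: str) -> None:
        """Check the grant, then record who did what, and when."""
        if scope not in self.granted_scopes:
            raise PermissionError(f"{self.agent_id} lacks scope '{scope}'")
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {self.agent_id} {scope}: {detail}")

# Example: a narrowly scoped agent that can read POs but not modify them.
agent = AgentRecord(
    agent_id="supplier-comms-agent-01",
    owner="procurement-lead@example.com",
    granted_scopes={"po:read"},
)
agent.act("po:read", "checked delivery date for PO 4501")
# agent.act("po:write", ...) would raise PermissionError: scope not granted.
print(agent.audit_log)
```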
AI agents are powerful tools, but they must be governed. For more in-depth information on this topic, watch our webinar with MSDynamicsWorld: Securing AI Agents in Microsoft Dynamics? Yep, These Need to be Secure Too!
Or, download our report: AI in Identity Security Demands a New Playbook.