The narrative at Microsoft Ignite 2025 was clear. We are moving from passive AI consumption to active AI agency. For the business, this promises efficiency. For the security operations centre, it represents a massive expansion of the attack surface.
When an LLM merely generates text, the primary risk is misinformation or data leakage. When an agent is granted permission to execute code, modify databases, and send communications, the risk profile shifts dramatically. We are no longer dealing with hallucinations that embarrass us. We are dealing with hallucinations that can effectively execute a denial of service or a fraudulent transaction.
Here are the specific security vectors introduced by the shift to Agentic AI.
The Identity Crisis
The most immediate risk is identity management. Agents require permissions to function. In many implementations, these agents operate ‘on behalf of’ a user, inheriting their access tokens and privileges.
This breaks the principle of least privilege. If a user has access to sensitive HR data they rarely touch, their agent also has that access. The difference is that the agent can parse, index, and exfiltrate that data in milliseconds. We are entering an era where Non-Human Identity (NHI) management is not an edge case but the core of the defence strategy. We need granular scoping for agent tokens, distinct from the human user’s broad session access.
Indirect Prompt Injection as RCE
We have spent the last two years worrying about users tricking chatbots into saying forbidden things. With agents, prompt injection evolves into Remote Code Execution (RCE).
Consider an autonomous agent tasked with summarising invoices and updating a ledger. If a malicious actor embeds hidden instructions within a PDF invoice (white text on a white background), the agent ingests it. If that instruction tells the agent to ‘transfer the balance to account X’, and the agent has the API connectivity to do so, the attack is successful without the attacker ever breaching the network perimeter. The data input itself becomes the weapon.
The Runaway Worker
Windows 365 for Agents allows these entities to run strictly in the cloud, decoupled from a user session, 24/7. This introduces the risk of the ‘runaway’ agent.
Without strict heuristic monitoring, an agent caught in a logic loop could burn through cloud compute credits or flood internal communication channels. We need ‘circuit breakers’ for AI logic – automated mechanisms that freeze an agent’s permissions if its behaviour deviates from a standard baseline (e.g. accessing 500 files in a minute instead of 5).
Shadow Agency
Shadow IT is already a plague. Shadow Agency will be worse. With tools like Copilot Studio becoming more accessible, non-technical departments will begin spinning up custom agents to handle departmental tasks without IT oversight.
These ‘home-brewed’ agents will likely lack proper input sanitisation, logging, or access controls. They will sit on the network, authenticated, waiting to be exploited. The new Agent 365 control plane is Microsoft’s answer to this, but it is only effective if policies are enforced before the agents are deployed, not after.
The Verdict
The perimeter is gone. Identity is the new perimeter, and now half the identities on your network are synthetic. Security teams must pivot from securing the endpoint to securing the intent of the automated action.
Trust, but verify. Then verify again.