Microsoft has unveiled a new support document that outlines the transformation of Windows 11 into an agentic operating system. The company envisions a future where Windows is inherently AI-native, emphasizing agentic capabilities that empower the PC to autonomously manage tasks on behalf of the user. According to Microsoft, “Windows is committed to making agentic experiences with apps more productive and secure for individuals and enterprises.” As part of this initiative, a new experimental feature known as agent workspace will soon be available in a private developer preview for Windows Insiders. This phased approach aims to gather feedback while enhancing foundational security.
Understanding Agent Workspaces
The agent workspace feature allows AI to run applications in parallel with the user, akin to a PC configured with multiple user accounts. Microsoft asserts that “these workspaces are designed to be lightweight and secure, with memory and CPU usage scaling based on activity.” This setup is touted as more efficient for common operations compared to a full virtual machine like Windows Sandbox, while still ensuring security isolation and supporting parallel execution.
Microsoft emphasizes that security remains paramount in the development of agentic AI experiences on Windows 11. The company states, “Agent workspaces represent a key step in enabling intelligent, agent-powered computing. Security in this context is not a one-time feature — it’s a continuous commitment.” As these features evolve, so too will the security measures, adapting through various rollout phases from preview to widespread availability.
Core Security Principles
To ensure the integrity of agentic OS experiences, Microsoft has identified three core pillars of security:
- Non-repudiation: All actions taken by an agent must be observable and distinguishable from those executed by a user.
- Confidentiality: Agents that handle protected user data must meet or exceed the security and privacy standards of the data they utilize.
- Authorization: Users must approve all requests to access their data and any actions taken by agents on their behalf.
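Microsoft has not published how Windows enforces the authorization pillar, but the idea is straightforward: an agent never touches user data or performs an action directly; every request passes through a gate that records an explicit user decision. The sketch below is a hypothetical illustration of that pattern (the `ActionBroker` class and its `approver` callback are invented for this example, not part of any Windows API):

```python
# Hypothetical sketch of the authorization pillar: an agent acts only
# through a broker, and every request requires an explicit user decision.

class AuthorizationError(Exception):
    """Raised when the user declines an agent's request."""


class ActionBroker:
    def __init__(self, approver):
        # `approver` stands in for a user-facing consent prompt; it
        # receives the agent, the action, and the resource, and
        # returns True only if the user approves.
        self.approver = approver

    def perform(self, agent_id, action, resource):
        # No approval, no action -- the agent has no direct path
        # around this check.
        if not self.approver(agent_id, action, resource):
            raise AuthorizationError(
                f"{agent_id} denied: {action} on {resource}")
        return f"{agent_id} performed {action} on {resource}"
```

In practice the approver would be an interactive prompt shown to the user; here a callback keeps the sketch self-contained. A policy that only permits reads, for example, would deny an agent's write request and raise `AuthorizationError`.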
Additionally, Microsoft has outlined essential security and design principles for AI agents operating on Windows:
- Agents are autonomous entities, vulnerable to attacks like any other user or software component, and their actions must be contained.
- Agents must generate logs detailing their activities, which Windows should verify through a tamper-evident audit log.
- Users should have the ability to supervise agent activities, reviewing and approving plans that involve multiple steps.
- Agents must operate under the principle of least privilege, with permissions that do not exceed those of the initiating user.
- No other entity in the system should have greater access to an agent than the user it acts on behalf of.
- Windows will support agents in processing data for clearly defined purposes, ensuring transparency and trust in line with Microsoft’s commitments.
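Microsoft has not detailed how its tamper-evident audit log works, but the standard technique behind that phrase is a hash chain: each log entry's hash covers the previous entry's hash, so altering any earlier record invalidates every hash after it. The following is a minimal illustrative sketch of that general technique (the `AuditLog` class is invented for this example, not a Windows interface):

```python
import hashlib
import json

# Illustrative sketch of a tamper-evident audit log using a hash chain.
# Each entry's hash incorporates the previous entry's hash, so modifying
# any earlier record breaks verification of the whole chain from that
# point onward.

GENESIS = "0" * 64  # fixed starting value before any entries exist


class AuditLog:
    def __init__(self):
        self.entries = []  # list of (record_json, chained_hash) pairs

    def append(self, agent_id, action):
        prev_hash = self.entries[-1][1] if self.entries else GENESIS
        record = json.dumps({"agent": agent_id, "action": action},
                            sort_keys=True)
        chained = hashlib.sha256((prev_hash + record).encode()).hexdigest()
        self.entries.append((record, chained))

    def verify(self):
        # Recompute every hash from the genesis value; any edited
        # record produces a mismatch and the check fails.
        prev_hash = GENESIS
        for record, chained in self.entries:
            expected = hashlib.sha256(
                (prev_hash + record).encode()).hexdigest()
            if expected != chained:
                return False
            prev_hash = chained
        return True
```

Appending entries and then rewriting any earlier record causes `verify()` to return `False`, which is the property that makes agent activity logs trustworthy for after-the-fact review.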
Microsoft’s commitment to integrating agentic AI experiences into Windows 11 is evident. Applications and services that leverage these capabilities will be required to adhere to strict guidelines, ensuring compliance across the platform. Each agentic capability will function within its own AI workspace, separate from the human user, thus safeguarding data and maintaining control over tasks.
As a notable first step, Microsoft has announced that Copilot Actions will be among the initial AI applications to use these experimental agentic capabilities. Furthermore, third-party developers will have the opportunity to create their own AI agents using the same framework Microsoft has outlined.