Microsoft's recent update highlights the risks associated with its new "Experimental Agentic Features" in AI, which are designed to interact with user applications and files. These AI agents can perform complex tasks but may also produce unexpected outputs and introduce security risks, such as cross-prompt injection (XPIA), leading to potential data exfiltration or malware installation. While Microsoft emphasizes the need for human oversight in AI-generated decisions, concerns about data integrity and system safety persist. The term "hallucinations" is used to describe instances of erroneous outputs from AI, suggesting a broader issue within generative AI technology. Currently, Windows 11’s agentic workspace feature is disabled by default, but the long-term status of this safeguard is uncertain as Microsoft integrates AI further into its products.