Microsoft warns: Windows 11 agentic features may hallucinate

After installing Windows 11 Build 26220.7262, users will see an optional toggle for Experimental agentic features under Settings > System > Artificial Intelligence Components. Microsoft cautions that as these features roll out, Artificial Intelligence models can still hallucinate and that new security risks tied to autonomous agents are emerging.

Microsoft issued an updated notice after announcing an agentic overhaul for Windows 11, warning that the new capabilities are experimental and imperfect. Following installation of Windows 11 Build 26220.7262, the operating system exposes a new toggle labeled ‘Experimental agentic features’ in Settings > System under ‘Artificial Intelligence Components.’ The feature is optional and must be enabled manually.

The company warned explicitly that “As these capabilities are introduced, Artificial Intelligence models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs.” When users enable the toggle, Windows will display a device warning that the capabilities are experimental and might affect the device. The notice frames functional limitations as a user-facing concern, but highlights security implications as the larger issue.

Microsoft and the article emphasize that new attack techniques tied to autonomous agents are already appearing, with cross-prompt injection singled out as a notable vector. In cross-prompt injection attacks, malicious instructions are concealed inside ordinary documents or interface elements so an autonomous agent follows the hidden instructions instead of its original task. That behavior could allow an agent to install malware, leak payment details, or carry out other harmful actions. The updated notice serves to inform users of the risks and to make clear that agentic features are experimental and require explicit user activation.

55

Impact Score

Pentagon surveillance powers collide with artificial intelligence limits

A dispute between the Pentagon and leading artificial intelligence companies is exposing how far US surveillance law lags behind modern data collection and analysis capabilities. Contracts, not legislation, are currently setting the boundaries for military use of powerful artificial intelligence tools.

Pentagon blacklist of Anthropic over autonomous weapons alarms Europe

The United States decision to label Anthropic a security risk for refusing military use of its technology in autonomous killing and mass surveillance is raising concerns in Europe about the future of responsible Artificial Intelligence in warfare and the credibility of international norms.

Contact Us

Got questions? Use the form to contact us.

Contact Form

Clicking next sends a verification code to your email. After verifying, you can enter your message.