Recent stock market volatility around software and services companies has highlighted a growing disconnect between how enterprises talk about artificial intelligence risk and where disruption is actually showing up. A seemingly modest “Plugins” feature inside Anthropic’s Claude Cowork desktop app triggered what Bloomberg described as the biggest stock selloff markets have yet seen driven by fear of artificial intelligence displacement. While attention initially focused on software-as-a-service names, professional services, information businesses and marketing and PR groups were reported down 5–20% by Thursday as investors abruptly repriced the threat of automation to knowledge-intensive work.
Inside many organisations, risk and compliance discussions remain fixated on familiar external threats such as manipulation, “poisoning” attacks and regulatory gaps, often expressed as abstract scores on scales of one to ten. In practice, a different set of vulnerabilities is emerging. Large organisations may have detailed artificial intelligence policies, mandate tools like Microsoft Copilot internally and restrict ChatGPT to personal accounts where no corporate data is allowed. Yet network traffic reveals that when sanctioned tools are clunky or constrained, real work flows to unsanctioned services: outside GDPR, outside corporate controls, without training and with no oversight. This creates what the author calls “compliance theatre”, where the letter of governance frameworks is met while client data and corporate knowledge leak across unmanaged channels, and where the larger unmeasured risk is rapid competitive obsolescence rather than headline breaches.
The firms suffering most in the latest selloff are not those that moved too quickly on artificial intelligence, but those that waited, or spent heavily on tools while starving learning and development. A key soft spot is undertrained, undersupported staff whose natural curiosity about new tools is not matched by artificial intelligence literacy, leading to accidental misuse and missed opportunities for innovation. Leaders, and even some IT directors, remain sceptical about large language models or out of date on them, creating a “parallel universe” gap between those working deeply with systems like Claude Cowork and those making strategic calls about them. As with the web’s earlier platform shift, the core question for boards is which risk they choose to measure: the risk of systems being manipulated or misused, or the risk of discovering via their share price that they ended up on the wrong side of artificial intelligence going right.
