Businesses are moving beyond generative artificial intelligence (AI) tools and starting to deploy agentic AI systems that can execute tasks, coordinate tools, make decisions, and run processes with minimal human input. Gartner Inc. predicts that 40% of enterprise applications will be integrated with task-specific AI agents by the end of 2026, up from less than 5% in 2025. This shift promises efficiency gains and deeper collaboration between humans and AI, but it also makes governance, oversight, and accountability much harder as decision-making spreads across multiple autonomous agents.
Those risks have become more concrete with OpenClaw, a tool created in late 2025 by developer Peter Steinberger as a “weekend project” that quickly gained popularity for connecting multiple AI agents and giving them shared access to systems. A report in February 2026 identified more than 42,000 exposed OpenClaw control panels across 82 countries, many of which had full system access. Researchers also found nearly 50,000 devices vulnerable to remote code execution. A separate cybersecurity investigation uncovered a misconfigured database exposing 1.5 million authentication tokens, along with tens of thousands of email addresses and private AI-to-AI communications. Because OpenClaw often connects to other services, the exposure could extend to emails, calendars, messaging platforms, social media accounts, and browsers.
Regulators are already responding. In February 2026, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), warned organisations against using OpenClaw and similar experimental tools, especially in environments handling sensitive data. The warning also challenged the assumption that locally run systems are automatically secure. For many organisations, remediation is not as simple as uninstalling a tool: visibility is a major issue, particularly when developers or employees adopt such systems without formal approval.
Shadow AI is a central concern. A Microsoft study from October 2025 found that 71% of UK employees admitted using unapproved AI tools at work. OpenClaw’s integrations with WhatsApp, Telegram, Discord, Slack and Teams can make remediation more difficult because credentials and access tokens may need to be reset across multiple systems. Practical responses include blocking prohibited applications on corporate networks, checking social media exposure, improving staff literacy, deploying data loss prevention and specialist Shadow AI monitoring tools, and tightening contracts and due diligence for subcontracted developers.
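As a rough illustration of the monitoring side of these responses, the sketch below scans proxy-log entries for outbound requests to a blocklist of unapproved AI tool domains. The domain names and the log format are assumptions for the example, not the configuration of any real product or of OpenClaw itself.

```python
# Hypothetical sketch: flag outbound requests to unapproved AI tool
# domains in a corporate proxy log. Domain list and log format are
# illustrative assumptions, not a real deployment.

# Hypothetical blocklist maintained by a security team.
UNAPPROVED_DOMAINS = {
    "openclaw.example",            # assumed endpoint, for illustration
    "chat.unapproved-ai.example",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests to unapproved domains.

    Assumes a simple space-separated format: timestamp user domain.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, user, domain = parts[:3]
        if domain in UNAPPROVED_DOMAINS:
            hits.append((user, domain))
    return hits

log = [
    "2026-02-01T09:00 alice chat.unapproved-ai.example",
    "2026-02-01T09:01 bob intranet.corp.example",
]
print(flag_shadow_ai(log))  # [('alice', 'chat.unapproved-ai.example')]
```

In practice this kind of check would sit inside a data loss prevention or secure web gateway product rather than a standalone script, but the principle is the same: known-unapproved endpoints are enumerated and traffic to them is surfaced for review.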
AI literacy is increasingly framed as a regulatory expectation, including under the EU AI Act, and organisations are being pushed to ensure staff understand both the opportunities and the risks of these systems. Formal assessments such as a data protection impact assessment (DPIA) or an AI impact assessment (AIIA) may also be necessary to ensure legal and compliance obligations are addressed. Stronger oversight, better training, and a structured risk approach are presented as essential for businesses that want to adopt agentic AI with confidence.
