Why businesses must act now on agentic Artificial Intelligence risk

Businesses are moving from generative tools to autonomous Artificial Intelligence agents that can execute tasks with limited human input. That shift is creating urgent governance, security, and accountability risks, underscored by recent concerns around OpenClaw.

Businesses are moving beyond generative Artificial Intelligence tools and starting to deploy agentic Artificial Intelligence systems that can execute tasks, coordinate tools, make decisions, and run processes with minimal human input. Gartner Inc. says 40% of enterprise applications will be integrated with task-specific Artificial Intelligence agents by the end of 2026, up from less than 5% in 2025. This shift promises efficiency gains and deeper collaboration between humans and Artificial Intelligence, but it also makes governance, oversight, and accountability much harder as decision-making spreads across multiple autonomous agents.

Those risks have become more concrete with OpenClaw, a tool created in late 2025 by developer Peter Steinberger as a “weekend project” that quickly gained popularity for connecting multiple Artificial Intelligence agents and giving them shared access to systems. A report in February 2026 identified more than 42,000 exposed OpenClaw control panels across 82 countries, many of which had full system access. Researchers also found nearly 50,000 devices vulnerable to remote code execution. A separate cybersecurity investigation uncovered a misconfigured database exposing 1.5 million authentication tokens, along with tens of thousands of email addresses and private Artificial Intelligence-to-Artificial Intelligence communications. Because OpenClaw often connects to other services, the exposure could extend to emails, calendars, messaging platforms, social media accounts, and browsers.

Regulators are already responding. In February 2026, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), warned organisations against using OpenClaw and similar experimental tools, especially in environments handling sensitive data. The warning also challenged the assumption that locally run systems are automatically secure. For many organisations, the remedy goes beyond simply uninstalling a tool: visibility is a major challenge, particularly when developers or employees adopt systems without formal approval.

Shadow Artificial Intelligence is a central concern. A Microsoft study from October 2025 found that 71% of UK employees admitted to using unapproved Artificial Intelligence tools at work. OpenClaw’s integrations with WhatsApp, Telegram, Discord, Slack and Teams can make remediation harder, because credentials and access tokens may need to be reset across multiple systems. Practical responses include blocking prohibited applications on corporate networks, checking social media exposure, improving staff literacy, deploying data loss prevention and specialist Shadow Artificial Intelligence monitoring tools, and tightening contracts and due diligence for subcontracted developers.
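One of the practical responses above, monitoring corporate networks for unapproved tools, can be sketched as a simple log scan. This is a minimal illustration only: the blocklisted domains and the log format are hypothetical assumptions for the example, not details from the report, and a real deployment would rely on proxy, DNS, or data loss prevention tooling rather than an ad-hoc script.

```python
# Minimal sketch: flag outbound requests to unapproved agentic-AI services
# in a proxy log. Domains and log format are illustrative assumptions.
BLOCKLIST = {"openclaw.example", "api.openclaw.example"}  # hypothetical domains

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests that hit a blocklisted domain."""
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]  # assumed "user domain method path" format
        if domain in BLOCKLIST:
            hits.append((user, domain))
    return hits

sample = [
    "alice api.openclaw.example GET /v1/agents",
    "bob mail.example.com GET /inbox",
]
print(flag_shadow_ai(sample))
```

In practice the output of such a scan would feed the remediation steps the article lists, for example prompting a credential and token reset for the affected accounts.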

Artificial Intelligence literacy is increasingly framed as a regulatory expectation, including under the EU Artificial Intelligence Act, and organisations are being pushed to ensure staff understand both the opportunities and the risks of these systems. Formal assessments such as a Data Protection Impact Assessment (DPIA) or an Artificial Intelligence Impact Assessment (AIIA) may also be necessary to ensure legal and compliance obligations are addressed. Stronger oversight, better training, and a structured risk approach are presented as essential for businesses that want to adopt agentic Artificial Intelligence with confidence.

Impact Score: 52

Apple explores Intel chip manufacturing alliance

Apple has reached a preliminary agreement with Intel to manufacture some chips for its devices, reflecting mounting pressure on semiconductor supply chains as Artificial Intelligence demand absorbs advanced capacity. The move also aligns with Washington’s push to expand domestic chip production and revive Intel’s foundry business.

US signals proactive approach on Artificial Intelligence regulation

US federal and state agencies are showing signs of a more proactive stance on Artificial Intelligence oversight, especially around security. The shift contrasts with more sector-specific or horizontal regulatory models emerging in the UK, Europe, Singapore and Japan.

Intel confirms ongoing product collaboration with NVIDIA

Intel CEO Lip-Bu Tan says work with NVIDIA is continuing, with new products expected from a partnership announced late last year. The collaboration points to deeper integration across client and server chips, including GeForce RTX graphics in Intel SoCs and customized Xeon processors for NVIDIA systems.
