Google's recent showcase of "agentic experiences" at I/O 2025 signaled a shift in the conversation around intelligent digital agents. The company's demonstration featured an assistant that proactively handled a series of complex real-world tasks: sourcing a user manual, finding tutorials, and even coordinating with a local store for parts, all with minimal human guidance. Beyond Google's own products, the Agent2Agent (A2A) protocol was introduced as an open standard poised to enable cross-company agent collaboration.
This vision of digital agents acting as automated coworkers, handling everything from travel booking to expense reports and operating in tandem, has drawn significant interest. Yet there's a growing risk that the excitement outpaces what these systems can reliably deliver. The term "agent" is being applied loosely to everything from basic scripts to nuanced AI workflows, which creates marketing confusion and risks consumer disillusionment. The lack of clear definitions invites so-called "agentwashing," where routine automation is misleadingly branded as cutting-edge intelligence. Without explicit expectations about autonomy, reliability, and performance, both users and businesses may find their ambitions unmet.
Reliability remains a major concern, largely because today's agents depend heavily on large language models (LLMs), which are prone to unpredictable, probabilistic errors, especially during intricate, multistep tasks involving external data or services. In one notable case, an AI support agent for the programming tool Cursor fabricated a usage policy, prompting user backlash and subscription cancellations. Such incidents show that these systems must be treated as more than stand-alone models: robust architectures need safeguards around uncertainty, accuracy, data use, and policy compliance. Companies like AI21 Labs are already enhancing reliability by integrating company data, layered controls, and structured workflows to move beyond the LLM's inherent unpredictability.
The agent vision also depends on seamless interagent cooperation, which A2A aspires to support. However, the current approach addresses communication syntax, not shared semantics or context, leaving agents unable to fully understand or leverage each other's capabilities. This challenge mirrors long-standing problems in distributed computing. And since agents may represent conflicting corporate or customer incentives, coordinating them optimally poses additional hurdles, requiring solutions in mechanism design, contracts, and game theory, not just protocol design.
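The syntax-versus-semantics gap can be shown with a toy example. This is not the real A2A message schema; it is a deliberately simplified sketch in which two agents parse the same well-formed JSON message yet read one of its fields differently, because the protocol constrained the message's shape but never pinned down what its fields mean.

```python
import json

# Toy illustration (not the actual A2A schema): both agents accept the
# same syntactically valid message, but interpret "budget" differently.

message = json.dumps({"task": "book_flight", "budget": 500})

def travel_agent_a(raw: str) -> str:
    req = json.loads(raw)  # syntax check passes for both agents
    return f"Searching flights under {req['budget']} USD"  # assumes dollars

def travel_agent_b(raw: str) -> str:
    req = json.loads(raw)
    return f"Searching flights under {req['budget']} EUR"  # assumes euros

# Same bytes, two incompatible readings of the request.
print(travel_agent_a(message))
print(travel_agent_b(message))
```

A schema validator would approve this exchange; only a shared semantic model (agreed units, vocabularies, and capability descriptions) would catch the mismatch.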
While these technical and organizational challenges are solvable, the industry risks a backlash if obstacles are downplayed and hype oversells near-term capabilities. For agent-based AI to become foundational rather than a fad, developers and companies must prioritize clarity, robustness, and aligned expectations, and resist the urge to coast on buzzwords. Thoughtful standards and collaboration can ensure digital agents fulfill their genuine potential rather than joining the tech world's graveyard of overpromised trends.