Artificial intelligence (AI) agents are increasingly being deployed to automate tasks such as sending emails, drafting documents, and editing databases on users' behalf. Despite significant investment, the performance of these agents remains inconsistent, largely because of difficulties integrating with the wide variety of digital tools in users' environments. The digital world is built around structured connections such as application programming interfaces, while AI models rely on the less predictable medium of natural language; as a result, agents struggle to understand, retrieve, and act on the information they need.
Two prominent initiatives, Anthropic's Model Context Protocol (MCP) and Google's Agent2Agent (A2A) protocol, seek to standardize how AI agents interact both with other programs and with each other. MCP acts as a translation layer, making it easier for agents to communicate with a variety of applications, while A2A aims to moderate and coordinate exchanges among multiple agents, a role seen as pivotal as AI evolves beyond isolated, single-process tasks. Usage is growing rapidly: over 15,000 MCP servers are already catalogued, and more than 150 companies partner on A2A, including industry heavyweights such as Salesforce and Adobe.
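Under the hood, MCP does not pass raw natural language between agent and application; it frames each interaction as a structured JSON-RPC 2.0 message, such as a `tools/call` request naming a tool the server exposes. A minimal sketch of what such a request looks like (the `send_email` tool and its arguments here are hypothetical, chosen only to illustrate the shape of the message):

```python
import json


def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build an MCP-style JSON-RPC 2.0 request asking a server to run a tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# A hypothetical email-sending tool, for illustration only.
request = make_tool_call(
    1, "send_email", {"to": "team@example.com", "subject": "Weekly report"}
)
print(json.dumps(request, indent=2))
```

The model's loosely worded intent ("email the team the weekly report") is thus pinned down into a request the server can validate and execute, which is what lets one protocol sit in front of many different applications.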
However, these protocols face three main hurdles: security, openness, and efficiency. Security risks are pronounced because agents with delegated control can be manipulated through techniques such as indirect prompt injection, potentially exposing sensitive data. MCP and A2A currently lack strong built-in security mechanisms; their standardization could aid future risk mitigation, though security experts remain skeptical.

On openness, both protocols are open source, which encourages collaboration and faster improvement, yet their governance models differ, sparking debate over control and inclusivity. Efficiency is a further concern: natural language adds interpretive overhead and raises operational costs, especially when agents' internal communications, which human users never see, consume extensive resources. Critics note that although natural language is accessible, it lacks the precision and compactness of code-based interfaces and may bottleneck future scalability.

Still, with robust development and wider industry participation, these protocols represent crucial steps toward more capable and trustworthy AI automation.