Enterprises are accelerating deployment of agentic artificial intelligence (AI) as copilots, assistants, and autonomous task-runners, but most are failing to scale beyond pilots. In late 2025, nearly two-thirds of companies were experimenting with AI agents, and 88% were using AI in at least one business function, up from 78% in 2024, yet only one in ten had actually scaled their AI agents. Experts attribute the bottleneck less to model capabilities than to weak data foundations that fail to supply reliable business context to humans and agents. To achieve quick wins and long-term value, leaders are urged to adopt an AI mindset grounded in trustworthy, well-governed data.
High-value data for agents is defined by business context rather than by whether it is structured or unstructured. Data for critical functions such as supply chain operations and financial planning is inherently context dependent, while fine-grained telemetry such as IoT data and logs creates value only when combined with appropriate business meaning. The greater risk for agentic AI is lack of grounding, which contributes to a “trust debt”: two-thirds of business leaders do not fully trust their data, according to the Institute for Data and Enterprise AI. Overcoming that trust gap requires shared definitions, semantic consistency, and reliable operational context, applied through integration, on-site enrichment, and governance pipelines.
Cloud-driven separation of compute and storage delivered flexibility but also triggered significant data sprawl, leaving data scattered across clouds, data lakes, warehouses, and many SaaS applications. As companies move to AI, more than two-thirds cite data silos as a top adoption challenge, and more than half of enterprises struggle with 1,000 or more data sources. The emerging priority is a semantic or knowledge layer that spans platforms, encodes business rules and relationships, and offers a governed, business-contextual view of data to humans and agents. Yet only 40% of companies believe their data management processes are ready for AI, down from 43% the previous year, underscoring growing awareness of infrastructure gaps as deployments advance.
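The idea of a semantic layer can be made concrete with a minimal sketch. The entity names, sources, and roles below are hypothetical illustrations, not a real product API: each governed business term carries a shared definition, pointers to its physical sources across platforms, and an access policy that applies equally to human and agent consumers.

```python
from dataclasses import dataclass


@dataclass
class SemanticEntity:
    """One governed business definition in the semantic layer."""
    name: str          # shared business term
    definition: str    # agreed meaning, independent of any one platform
    sources: list      # physical locations across clouds and SaaS systems
    allowed_roles: set # governance: which humans or agents may read it


class SemanticLayer:
    """Cross-platform catalog of business context for humans and agents."""

    def __init__(self):
        self._entities = {}

    def register(self, entity: SemanticEntity):
        self._entities[entity.name] = entity

    def resolve(self, name: str, role: str) -> SemanticEntity:
        """Return a term's governed context, enforcing the access policy."""
        entity = self._entities[name]
        if role not in entity.allowed_roles:
            raise PermissionError(f"role '{role}' may not read '{name}'")
        return entity


# Hypothetical example: one supply-chain metric spanning two platforms.
layer = SemanticLayer()
layer.register(SemanticEntity(
    name="on_time_delivery_rate",
    definition="Orders delivered by promised date / total orders, trailing 30 days",
    sources=["snowflake://ops.shipments", "sap://orders"],
    allowed_roles={"supply_chain_agent", "ops_analyst"},
))

ctx = layer.resolve("on_time_delivery_rate", role="supply_chain_agent")
print(ctx.definition)
```

The point of the sketch is that the definition and the policy live once, in the layer, rather than being re-derived by every agent or dashboard that touches the underlying tables.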
Agentic AI is expected to become a new layer on top of existing stacks rather than a replacement for SaaS, with value moving up from infrastructure to platforms, SaaS, and now agents. In this reshaped stack, SaaS systems remain the systems of record, the semantic layer becomes the source of business context, and AI agents form an engagement layer that orchestrates across systems, with humans and agents treated as first-class users of business logic. Agents cannot and should not connect to every operational backend, which further elevates the need for a business-aware fabric layer.
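The orchestration pattern described above can be sketched as a simple mediator. All class and term names here are hypothetical: the agent talks only to a fabric that routes business terms to the right system of record, so it never holds per-backend connections or credentials.

```python
class SystemOfRecord:
    """Stand-in for a SaaS backend (e.g., an ERP or CRM)."""

    def __init__(self, name, records):
        self.name = name
        self._records = records

    def fetch(self, key):
        return self._records[key]


class Fabric:
    """Business-aware fabric: maps business terms to systems of record,
    so agents never connect to operational backends directly."""

    def __init__(self):
        self._routes = {}  # business term -> (system, backend key)

    def bind(self, term, system, key):
        self._routes[term] = (system, key)

    def query(self, term):
        system, key = self._routes[term]
        return system.fetch(key)


class Agent:
    """Engagement-layer agent: its only dependency is the fabric."""

    def __init__(self, fabric):
        self.fabric = fabric

    def answer(self, term):
        return self.fabric.query(term)


# Hypothetical wiring: the ERP stays the system of record; the fabric
# translates a business term into that backend's internal key.
erp = SystemOfRecord("erp", {"open_po_count": 42})
fabric = Fabric()
fabric.bind("open_purchase_orders", erp, "open_po_count")

agent = Agent(fabric)
print(agent.answer("open_purchase_orders"))  # 42
```

Swapping the ERP for another backend changes only the fabric binding, not the agent, which is the practical meaning of agents not connecting to every operational system.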
Most enterprises are advised to start where their data already resides, in environments such as Snowflake, Databricks, Google BigQuery, or SAP, while avoiding recreating vendor lock-in. Priorities include preserving and exposing the business context around operational and application data; investing early in governance and semantics, with shared policies, access rules, and models; and favoring open, fabric-style interoperability over consolidating all data into a single stack. Leaders are cautioned not to pursue full automation of critical processes too early, since meaningful human oversight will remain necessary. Early gains are expected from less-critical processes and from agents working on fresh, stateful data rather than static dashboards, with organizations then deciding how to reinvest AI-driven efficiencies into growth or new market opportunities.
