Enterprises are accelerating adoption of confidential computing as artificial intelligence (AI) agents begin handling sensitive data and workflows across IT environments. Analysts and executives cited regulatory pressure for auditability in sectors such as healthcare and financial services, and a broader need to prevent unauthorized access to protected data. Confidential computing establishes a hardware-enforced boundary around models and data, releasing information only to authorized models and agents.
The approach aligns with enterprises seeking control through private-cloud AI strategies. Google now allows companies to run its Gemini models entirely in-house, without an internet or Google Cloud connection, by using confidential computing on Nvidia GPUs. Although Gemini is designed for Google’s TPUs, an exported model can operate inside a confidential virtual machine on Nvidia hardware, protecting both Google’s model intellectual property and enterprise data. Attestation technology verifies that only authorized users and environments can access the model and its outputs.
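The attestation step described above can be illustrated with a minimal sketch. Everything here is hypothetical: real schemes such as AMD SEV-SNP, Intel TDX, or Nvidia GPU attestation use vendor-signed certificate chains rather than the shared-key MAC used below, and the function names and "golden" measurement are invented for illustration. The core idea is the same, though: the verifier compares the environment's reported measurement (a hash of the loaded code and model) against an expected value, checks a fresh nonce to prevent replay, and releases secrets only if every check passes.

```python
import hashlib
import hmac
import secrets

# Hypothetical "golden" measurement the verifier expects from an
# approved confidential VM image; in practice this comes from a
# reproducible build of the runtime and model.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-model-and-runtime").hexdigest()

# Placeholder for the hardware vendor's signing key; real attestation
# uses asymmetric signatures anchored in a vendor certificate chain.
VERIFIER_KEY = b"demo-shared-key"

def make_report(loaded_image: bytes, nonce: bytes) -> dict:
    """What the confidential VM would produce: a measurement of what it
    actually loaded, plus a MAC binding that measurement to the nonce."""
    measurement = hashlib.sha256(loaded_image).hexdigest()
    mac = hmac.new(VERIFIER_KEY, measurement.encode() + nonce,
                   hashlib.sha256).hexdigest()
    return {"measurement": measurement, "nonce": nonce, "mac": mac}

def verify_report(report: dict, expected_nonce: bytes) -> bool:
    """Release secrets (model weights, data keys) only if all checks pass."""
    if report["nonce"] != expected_nonce:
        return False  # stale or replayed report
    expected_mac = hmac.new(VERIFIER_KEY,
                            report["measurement"].encode() + expected_nonce,
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(report["mac"], expected_mac):
        return False  # report not produced by trusted hardware
    return report["measurement"] == EXPECTED_MEASUREMENT  # approved image?

nonce = secrets.token_bytes(16)
good = make_report(b"approved-model-and-runtime", nonce)
bad = make_report(b"tampered-runtime", nonce)
print(verify_report(good, nonce))  # True: environment matches, access granted
print(verify_report(bad, nonce))   # False: measurement mismatch, access denied
```

The design point this sketch captures is that the decision to release a model or data is made by software checking cryptographic evidence, not by trusting the host operating system or cloud operator.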
Vendors point to demand for local data processing, low-latency decision making, and data residency compliance as key drivers. Analysts also highlight that GPUs offer a mix of performance and security well suited to regulated industries, including healthcare, finance, and the public sector. Beyond Google, Meta has begun using what it calls Private Processing to power WhatsApp’s new generative summary feature, which creates private message summaries that are not visible to Meta or third parties. Meta built a private computing environment on AMD and Nvidia GPUs so WhatsApp data can be processed securely while minimizing exposure as it moves to the cloud.
The confidential computing momentum extends further. Anthropic introduced Confidential Inference to provide security guarantees and a trusted chain for data moving through models and increasingly agentic inference pipelines. Apple has promoted its Private Cloud Compute ecosystem, and chipmakers AMD and Intel offer CPU-based confidential computing through virtual machines for non-AI workloads as well. These efforts reflect a broader push to secure both model execution and data flow.
Despite progress, experts caution that cloud implementations remain fragile. Data typically travels to GPUs through CPUs, and any weakness in that path can undermine attestation and open gaps for attackers. CPU-based technologies can be susceptible to side-channel attacks, and a Google-disclosed vulnerability last December affected AMD confidential computing, requiring microcode updates. As organizations deploy agentic AI at scale, the industry must prove confidential computing can withstand real-world adversaries while meeting stringent compliance requirements.