As AI agents go mainstream, companies lean into confidential computing for data security

Enterprises are embracing confidential computing to secure models, data, and agent workflows as AI deployments expand. Big tech firms are rolling out hardware-backed protections, but experts warn the approach still faces reliability and vulnerability challenges.

Enterprises are accelerating adoption of confidential computing as AI agents begin handling sensitive data and workflows across IT environments. Analysts and executives cite regulatory pressure for auditability in sectors such as healthcare and financial services, along with a broader need to prevent unauthorized access to protected data. Confidential computing establishes a hardware-enforced boundary that locks down models and data, releasing information only to authorized models and agents.

The approach aligns with enterprises seeking control through private cloud AI strategies. Google now allows companies to run its Gemini models entirely in-house, without an internet or Google Cloud connection, by using confidential computing on Nvidia GPUs. Although Gemini is designed for Google's TPUs, an exported model can operate inside a confidential virtual machine on Nvidia hardware, protecting both Google's model intellectual property and enterprise data. Attestation technology verifies that only authorized users and environments can access the model and outputs.
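The attestation flow described above can be sketched in miniature. This is a hypothetical illustration only: the function names, the HMAC-based signing, and the quote format are invented for the sketch. Real attestation relies on vendor-specific hardware quotes (signed by keys fused into the CPU or GPU) and certificate chains, not a shared software key.

```python
import hashlib
import hmac
import json

# Stand-in for a hardware-rooted signing key; invented for this sketch.
SHARED_KEY = b"demo-attestation-key"

def sign_quote(measurement: str, nonce: str) -> dict:
    """Produce a signed 'quote' binding an environment measurement to a nonce."""
    payload = json.dumps({"measurement": measurement, "nonce": nonce}, sort_keys=True)
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_quote(quote: dict, expected_measurement: str, nonce: str) -> bool:
    """Release data only if the signature verifies and both the measurement
    and the fresh nonce match what the relying party expects."""
    sig = hmac.new(SHARED_KEY, quote["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, quote["signature"]):
        return False
    claims = json.loads(quote["payload"])
    return claims["measurement"] == expected_measurement and claims["nonce"] == nonce

# A "golden" measurement of the approved VM image the verifier expects.
golden = hashlib.sha256(b"approved-vm-image-v1").hexdigest()
quote = sign_quote(golden, nonce="req-42")
print(verify_quote(quote, golden, "req-42"))  # True: environment matches
print(verify_quote(quote, golden, "req-43"))  # False: nonce mismatch (replay)
```

The nonce check is what prevents an attacker from replaying an old, valid quote; in production the verification also walks a certificate chain back to the silicon vendor.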

Vendors point to demand for local data processing, low-latency decision making, and data residency compliance as key drivers. Analysts also highlight that GPUs offer a mix of performance and security well suited to regulated industries, including healthcare, finance, and the public sector. Beyond Google, Meta has begun using what it calls Private Processing to power WhatsApp’s new generative summary feature, which creates private message summaries that are not visible to Meta or third parties. Meta built a private computing environment on AMD and Nvidia GPUs so WhatsApp data can be processed securely while minimizing exposure as it moves to the cloud.

The confidential computing momentum extends further. Anthropic introduced Confidential Inference to provide security guarantees and a trusted chain for data moving through models and increasingly agentic inference pipelines. Apple has promoted its Private Cloud Compute ecosystem, and chipmakers AMD and Intel offer CPU-based confidential computing through virtual machines for non-AI workloads as well. These efforts reflect a broader push to secure both model execution and data flow.

Despite progress, experts caution that cloud implementations remain fragile. Data typically travels to GPUs through CPUs, and any weakness in that path can undermine attestation and open gaps for attackers. CPU-based technologies can be susceptible to side-channel attacks, and a Google-disclosed vulnerability last December affected AMD confidential computing, requiring microcode updates. As organizations deploy agentic AI at scale, the industry must prove confidential computing can withstand real-world adversaries while meeting stringent compliance requirements.

Impact Score: 70

A blueprint for implementing RAG at scale

Retrieval-augmented generation is positioned as essential for most large language model applications because it injects company-specific knowledge into responses. For organizations rolling out generative AI, the approach promises higher accuracy and fewer hallucinations.
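The retrieve-then-generate pattern can be shown with a toy sketch. Everything here is invented for illustration: the corpus, the word-overlap scoring, and the prompt template. Production RAG systems use vector embeddings for retrieval and pass the assembled prompt to an LLM.

```python
import re

# Toy corpus of company-specific documents (invented for this sketch).
DOCS = [
    "Refund policy: customers may return hardware within 30 days.",
    "Support hours: the help desk is staffed weekdays 9am to 5pm.",
    "Security: all customer data is encrypted at rest and in transit.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    q = set(re.findall(r"\w+", query.lower()))
    d = set(re.findall(r"\w+", doc.lower()))
    return len(q & d)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the top-k documents most relevant to the query."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Inject retrieved company-specific context ahead of the question,
    which is the step that grounds the model's answer."""
    context = "\n".join(retrieve(query, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("What is the refund policy for hardware?"))
```

Because the answer is constrained to retrieved context, the model has less room to hallucinate, which is the accuracy benefit the article describes.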

How artificial intelligence will accelerate biomedical research and discovery

A Microsoft Research Podcast episode brings together Daphne Koller, Noubar Afeyan, and Eric Topol to examine how artificial intelligence is reshaping biomedicine, from target discovery and autonomous labs to the pursuit of a virtual cell. The discussion charts rapid progress since GPT-4 and what it means for patients, researchers, and regulators.
