As Artificial Intelligence agents go mainstream, companies lean into confidential computing for data security

Enterprises are embracing confidential computing to secure models, data, and agent workflows as Artificial Intelligence deployments expand. Big tech firms are rolling out hardware-backed protections, but experts warn the approach still faces reliability and vulnerability challenges.

Enterprises are accelerating adoption of confidential computing as Artificial Intelligence agents begin handling sensitive data and workflows across IT environments. Analysts and executives cited regulatory pressure for auditability in sectors such as healthcare and financial services, and a broader need to prevent unauthorized access to protected data. Confidential computing establishes a hardware-enforced boundary that isolates models and data, releasing information only to authorized models and agents.

The approach aligns with enterprises seeking control through private cloud Artificial Intelligence strategies. Google now allows companies to run its Gemini models entirely in-house, without an internet or Google Cloud connection, by using confidential computing on Nvidia GPUs. Although Gemini is designed for Google’s TPUs, an exported model can operate inside a confidential virtual machine on Nvidia hardware, protecting both Google’s model intellectual property and enterprise data. Attestation technology verifies that only authorized users and environments can access the model and outputs.

Vendors point to demand for local data processing, low-latency decision making, and data residency compliance as key drivers. Analysts also highlight that GPUs offer a mix of performance and security well suited to regulated industries, including healthcare, finance, and the public sector. Beyond Google, Meta has begun using what it calls Private Processing to power WhatsApp’s new generative summary feature, which creates private message summaries that are not visible to Meta or third parties. Meta built a private computing environment on AMD and Nvidia GPUs so WhatsApp data can be processed securely while minimizing exposure as it moves to the cloud.

The confidential computing momentum extends further. Anthropic introduced Confidential Inference to provide security guarantees and a trusted chain for data moving through models and increasingly agentic inference pipelines. Apple has promoted its Private Cloud Compute ecosystem, and chipmakers AMD and Intel offer CPU-based confidential computing through virtual machines, extending the approach to workloads beyond Artificial Intelligence. These efforts reflect a broader push to secure both model execution and data flow.

Despite progress, experts caution that cloud implementations remain fragile. Data typically travels to GPUs through CPUs, and any weakness along that path can undermine attestation and open gaps for attackers. CPU-based technologies can be susceptible to side-channel attacks, and a vulnerability in AMD confidential computing, disclosed by Google last December, required microcode updates to fix. As organizations deploy agentic Artificial Intelligence at scale, the industry must prove confidential computing can withstand real-world adversaries while meeting stringent compliance requirements.

Impact Score: 70

Saudi Artificial Intelligence startup launches Arabic LLM

Misraj Artificial Intelligence unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing Artificial Intelligence agents for enterprises and public institutions.

Introducing Mistral 3: open Artificial Intelligence models

Mistral 3 is a family of open, multimodal and multilingual Artificial Intelligence models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral Artificial Intelligence partner to accelerate new family of open models

NVIDIA and Mistral Artificial Intelligence announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise Artificial Intelligence deployments, available starting Tuesday, Dec. 2.
