Open artificial intelligence models shown to be secure for enterprise adoption

A LatticeFlow evaluation conducted with SambaNova found that targeted guardrails can raise the security scores of leading open-source generative models from as low as 1.8 percent to as high as 99.6 percent while preserving service quality. The findings position open models as viable for regulated sectors such as financial services.

Zurich, September 23, 2025. A new evaluation led by LatticeFlow, in collaboration with SambaNova, reports quantifiable evidence that open-source generative artificial intelligence models can meet or exceed the security levels of closed systems when equipped with targeted risk guardrails. In controlled tests, security scores for the top open models rose from as low as 1.8 percent to as high as 99.6 percent while maintaining above 98 percent quality of service, indicating that robust protections need not compromise usability. The results support enterprise deployment across a wide range of use cases, including financial services.

The study assessed five widely used open foundation models: Qwen3-32B, DeepSeek-V3-0324, Llama-4-Maverick-17B-128E-Instruct, DeepSeek-R1, and Meta-Llama-3.3-70B-Instruct. Each model was tested both as a base system and as a guardrailed system enhanced with a dedicated input-filtering layer that blocks adversarial prompts. The evaluation focused on cybersecurity risks relevant to enterprises, simulating attack scenarios such as prompt injection and manipulation to measure resilience and the impact of safeguards on overall usability.
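To make the guardrail concept concrete, the sketch below shows one simple way an input-filtering layer can sit in front of a model call, rejecting prompts that match known injection patterns before they reach the model. This is a minimal, hypothetical illustration; the pattern list, function names, and filtering logic are assumptions for demonstration and do not represent the actual guardrail evaluated by LatticeFlow and SambaNova.

```python
import re

# Illustrative patterns for common prompt-injection attempts.
# Hypothetical examples only, not the rules used in the evaluation.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now in developer mode",
]

def is_adversarial(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def guarded_generate(prompt: str, model_fn) -> str:
    """Wrap a base model call with the input-filtering guardrail.

    model_fn is any callable that takes a prompt string and returns
    the model's response; benign prompts pass through unchanged.
    """
    if is_adversarial(prompt):
        return "Request blocked by security guardrail."
    return model_fn(prompt)
```

In practice, production guardrails typically combine such pattern checks with learned classifiers; the key property the study measures is that blocking adversarial inputs this way leaves benign requests, and hence quality of service, essentially untouched.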

Key results showed substantial security gains with guardrails: DeepSeek-R1 improved from 1.8 percent to 98.6 percent, Llama-4-Maverick from 33.5 percent to 99.4 percent, Llama-3.3-70B-Instruct from 51.8 percent to 99.4 percent, and Qwen3-32B from 56.3 percent to 99.6 percent. All tested models sustained over 98 percent quality of service, underscoring that the added protections did not materially degrade the user experience. The data provide a clear benchmark for decision-makers evaluating open models for secure, enterprise-scale deployment.

The report addresses a key barrier to adoption. Many organizations pursue open-source generative artificial intelligence for flexibility, customization, and reduced vendor lock-in, yet progress has often stalled due to a lack of clear, quantifiable security insights. LatticeFlow and SambaNova contend that the new evidence demonstrates open models can be auditable, controllable, and provably secure with the right safeguards, offering a path forward for regulated industries and risk-conscious teams.

LatticeFlow positions its approach within a broader push for rigorous artificial intelligence governance, emphasizing deep technical assessments that enable evidence-based decisions. The company highlights work on an EU Artificial Intelligence Act framework for generative artificial intelligence developed with ETH Zurich and INSAIT, aligning its evaluation methodology with emerging regulatory expectations and enterprise risk requirements.
