Samsung plans Z-NAND comeback with 15x performance claim

Samsung says its revived Z-NAND will deliver up to 15x peak performance and about 80% lower power consumption, and it proposes direct GPU access to accelerate AI model transfers.

Samsung has announced plans to revive Z-NAND, positioning it as a high-performance flash tier aimed at narrowing the gap between memory and mass storage for AI workloads. The company claims the new Z-NAND can hit up to 15x the peak performance of conventional NAND while cutting power use by about 80 percent. It also unveiled a mechanism for GPUs and GPU-based accelerators to access Z-NAND directly, a concept analogous to DirectStorage in gaming but tailored to moving large model data between accelerators and persistent media.

The announcement carries caveats. Samsung has not published detailed benchmarks or defined the performance metrics behind the headline numbers, so the claims cannot be independently verified yet. Historically, Z-NAND delivered lower access latency and strong IOPS but only modest density gains, and high cost limited adoption. That pattern echoes the trajectory of Intel's 3D XPoint, which offered latency and low-queue-depth advantages but was discontinued after failing to find broad commercial traction. At the same time, rivals are pursuing other approaches: Kioxia is pushing XL-FLASH for very high IOPS, and industry groups are standardizing High Bandwidth Flash to boost throughput. The evolving AI demand curve makes the timing more favorable, but it also raises expectations for concrete, system-level results.

Real market impact will hinge on three practical factors: verifiable benchmarks, price competitiveness, and ecosystem support. Performance numbers need to appear in independent tests and in real-world server and accelerator environments. Pricing must be attractive versus alternatives that trade density and cost for latency or throughput, and software and hardware hooks must exist so accelerators, hypervisors, and storage stacks can exploit direct-access modes. If Samsung delivers consistent latency advantages, meaningful throughput at scale, and an integrated stack that includes drivers and cloud vendor support, Z-NAND could carve out a niche in AI infrastructure. Until then, the announcement is promising but remains mostly a marketing claim pending proof in deployed systems.

Impact Score: 64

Saudi AI startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing AI agents for enterprises and public institutions.

Introducing Mistral 3: open AI models

Mistral 3 is a family of open, multimodal and multilingual AI models that includes three Ministral edge models and a sparse Mistral Large 3, a mixture-of-experts model with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments, available starting Tuesday, Dec. 2.
