SK hynix and Nvidia plan 100M IOPS artificial intelligence NAND by 2027

SK hynix is collaborating with Nvidia on next-generation, artificial intelligence focused NAND that aims to dramatically increase I/O performance for data center and on-device workloads by 2027.

SK hynix is pushing hard on next-generation artificial intelligence NAND, and 2027 looks like its next major milestone. According to a ZDNet report, the company is working with Nvidia to build ultra-fast, artificial intelligence focused NAND chips that could deliver up to 30x the performance of today’s enterprise SSDs. Early samples are planned for late 2026, with a second generation entering mass production by the end of 2027, a staged rollout aimed at quickly bringing the new architecture into real-world deployments.

A central pillar of this push is the SK hynix AI-N P high-performance SSD architecture, which is aimed at removing I/O bottlenecks in large artificial intelligence inference workloads where storage throughput can limit accelerator utilization. ZDNet says the redesigned NAND and controller are already in proof-of-concept (PoC) testing with Nvidia, indicating the partners are validating the media and the controller stack together. SK hynix is targeting 25 million IOPS on PCIe Gen 6 for first samples next year, and 100 million IOPS for the production version in 2027. For comparison, current high-end enterprise SSDs manage roughly 2-3 million IOPS, which highlights how aggressively SK hynix is trying to exceed existing data center storage performance.
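As a rough back-of-the-envelope check on those figures, the sketch below (plain Python, with the IOPS targets taken from the numbers quoted above and the 2-3 million IOPS baseline treated as an approximate range for current high-end drives, not a measurement of any specific product) computes the implied speedup multiples.

```python
# Back-of-the-envelope comparison of the reported IOPS targets against
# today's high-end enterprise SSDs. The targets are the figures quoted in
# the article; the baseline is an approximate range, not a benchmark result.

BASELINE_IOPS = (2_000_000, 3_000_000)   # current high-end enterprise SSDs (approx.)
SAMPLE_2026_IOPS = 25_000_000            # first AI-N P samples, PCIe Gen 6 target
PRODUCTION_2027_IOPS = 100_000_000       # production AI-N P target for 2027

def speedup_range(target, baseline=BASELINE_IOPS):
    """Return the (min, max) speedup of `target` over the baseline range."""
    low, high = baseline
    return target / high, target / low

for label, target in [("2026 samples", SAMPLE_2026_IOPS),
                      ("2027 production", PRODUCTION_2027_IOPS)]:
    lo, hi = speedup_range(target)
    print(f"{label}: {lo:.0f}x to {hi:.0f}x over current enterprise SSDs")

# Expected output:
#   2026 samples: 8x to 12x over current enterprise SSDs
#   2027 production: 33x to 50x over current enterprise SSDs
```

Those multiples line up roughly with the 8-10x figure quoted for the 2026 generation and the up-to-30x claim for the production parts.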

The roadmap also includes AI-N B, better known as HBF (High Bandwidth Flash), which is being developed with Sandisk to address bandwidth-bound workloads alongside the IOPS-focused AI-N P products. An alpha spec is expected in early 2026, with evaluation units coming in 2027, giving partners time to test and integrate the technology. Behind these efforts sits a broader strategy that splits SK hynix’s next-generation artificial intelligence NAND into three tracks: AI-N P (ultra-high performance SSD) for performance, AI-N B (High Bandwidth Flash) for bandwidth, and AI-N D (high capacity/low cost SSD) for higher-capacity, lower-cost designs. ZDNet adds that SK hynix views the artificial intelligence market as two distinct fronts: large data-center deployments demanding massive throughput, and on-device artificial intelligence favoring low-power efficiency. The AI-N lineup is positioned to serve both, with the 2026 AI-N P generation expected to offer roughly 8-10x the performance of current SSDs.
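For quick reference, here is a minimal sketch (plain Python, illustrative only; the names and roles simply restate the lineup described above, not an official SK hynix specification):

```python
# Illustrative summary of the three AI-N tracks described in the article.
AI_N_LINEUP = {
    "AI-N P": {"focus": "performance",
               "form": "ultra-high performance SSD",
               "milestone": "25M IOPS samples in 2026, 100M IOPS production in 2027"},
    "AI-N B": {"focus": "bandwidth",
               "form": "High Bandwidth Flash (HBF), developed with Sandisk",
               "milestone": "alpha spec in early 2026, evaluation units in 2027"},
    "AI-N D": {"focus": "capacity and cost",
               "form": "high-capacity, low-cost SSD",
               "milestone": "not yet detailed in the report"},
}
```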
