SK hynix and Nvidia plan 100M IOPS artificial intelligence NAND by 2027

SK hynix is collaborating with Nvidia on next-generation, artificial intelligence-focused NAND that aims to dramatically increase I/O performance for data center and on-device workloads by 2027.

SK hynix is pushing hard on next-generation artificial intelligence NAND, and 2027 looks to be its next major milestone. According to a ZDNet report, the company is working with Nvidia to build ultra-fast, artificial intelligence-focused NAND chips that could deliver up to 30x the performance of today's enterprise SSDs. Early samples are planned for late 2026, with a second generation entering mass production by the end of 2027, a staged rollout aimed at quickly bringing the new architecture into real-world deployments.

A central pillar of this push is the SK hynix AI-N P high-performance SSD architecture, aimed at removing I/O bottlenecks in large artificial intelligence inference workloads where storage throughput can limit accelerator utilization. ZDNet says the redesigned NAND and controller are already in proof-of-concept (PoC) testing with Nvidia, indicating the partners are validating the media and the controller stack together. SK hynix is targeting 25 million IOPS on PCIe Gen 6 for first samples next year, and 100 million IOPS for the production version in 2027. For comparison, today's high-end enterprise SSDs manage roughly 2-3 million IOPS, which highlights how aggressively SK hynix is trying to exceed existing data center storage performance levels.
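As a rough sanity check, the multiples quoted in the article can be derived directly from the IOPS figures it reports. The sketch below (an illustrative calculation, not vendor data beyond the article's own numbers) compares the 25M IOPS sample target and the 100M IOPS production target against the stated 2-3M IOPS baseline for current enterprise SSDs:

```python
# Figures as reported in the article; the baseline is a range because
# "current enterprise SSDs" are quoted at roughly 2-3 million IOPS.
BASELINE_IOPS_RANGE = (2_000_000, 3_000_000)  # today's high-end enterprise SSDs
SAMPLE_IOPS = 25_000_000       # first AI-N P samples, PCIe Gen 6, late 2026
PRODUCTION_IOPS = 100_000_000  # production target, 2027

def speedup_range(target_iops, baseline_range):
    """Return the (min, max) multiple of target_iops over the baseline range."""
    low, high = baseline_range
    return target_iops / high, target_iops / low

print(speedup_range(SAMPLE_IOPS, BASELINE_IOPS_RANGE))      # ≈ 8.3x to 12.5x
print(speedup_range(PRODUCTION_IOPS, BASELINE_IOPS_RANGE))  # ≈ 33x to 50x
```

The first-sample multiple lines up with the "roughly 8-10x" claim for the 2026 generation, and the production multiple is consistent with (indeed slightly above) the headline "up to 30x" figure.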

The roadmap also includes AI-N B, better known as HBF (High Bandwidth Flash), which is being developed with Sandisk to address bandwidth-bound workloads alongside the IOPS-focused AI-N P products. An alpha spec is expected in early 2026 with evaluation units coming in 2027, giving partners time to test and integrate the technology. Behind these efforts a broader strategy emerges, with SK hynix splitting its next-generation artificial intelligence NAND into three lines: AI-N P (ultra-high performance SSD) for performance, AI-N B (High Bandwidth Flash) for bandwidth, and AI-N D (High Capacity/Low Cost SSD) for higher-capacity, lower-cost designs. ZDNet adds that SK hynix views the artificial intelligence market as two distinct fronts: large data-center deployments demanding massive throughput, and on-device artificial intelligence favoring low-power efficiency. SK hynix is positioning its AI-N lineup to serve both, with the 2026 AI-N P generation expected to offer roughly 8-10x the performance of current SSDs.


