PCI-SIG announces PCIe 7.0 specification with 128.0 GT/s transfer rates

PCI-SIG unveils PCIe 7.0, targeting Artificial Intelligence, 800G Ethernet, and quantum computing workloads with a 128.0 GT/s raw transfer rate.

PCI-SIG has officially released the PCI Express (PCIe) 7.0 specification, pushing performance boundaries to a raw bit rate of 128.0 GT/s and supporting bidirectional throughput of up to 512 GB/s in an x16 configuration. The new standard leverages advanced technologies such as four-level pulse-amplitude modulation (PAM4) signaling and flit-based encoding to drive faster data transfers and improved protocol efficiency. Despite this leap in performance, PCIe 7.0 retains backward compatibility with prior PCI Express generations, easing the industry's transition to next-generation platforms.
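The x16 figure follows from straightforward arithmetic: at 128 GT/s the raw per-lane bit rate is 128 Gb/s in each direction. The sketch below (function name ours; flit, FEC, and header overhead deliberately ignored, so real payload bandwidth is somewhat lower) shows how the headline number is derived:

```python
def pcie_throughput_gbs(transfer_rate_gts: float, lanes: int,
                        bidirectional: bool = True) -> float:
    """Approximate raw PCIe link throughput in GB/s.

    For PAM4 generations (PCIe 6.0/7.0) the transfer rate in GT/s equals
    the raw bit rate in Gb/s per lane, per direction. Protocol overhead
    (FEC, CRC, flit headers) is not modeled here.
    """
    gb_per_lane = transfer_rate_gts / 8        # Gb/s -> GB/s, one direction
    total = gb_per_lane * lanes
    return total * 2 if bidirectional else total

print(pcie_throughput_gbs(128.0, 16))   # -> 512.0 (PCIe 7.0 x16, both directions)
print(pcie_throughput_gbs(64.0, 16))    # -> 256.0 (PCIe 6.0 x16, for comparison)
```

Each doubling of the transfer rate since PCIe 6.0 doubles this raw figure, which is why the spec's headline bandwidth doubles generation over generation.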

Key features introduced in PCIe 7.0 address the escalating bandwidth needs of cutting-edge applications across Artificial Intelligence, machine learning, 800G Ethernet, large-scale cloud infrastructure, and quantum computing. The integration of PAM4 signaling represents a significant shift from earlier NRZ-based generations, boosting channel capacity and making PCIe 7.0 suitable for demanding, data-driven workloads. Improved power efficiency further positions the standard as an attractive option for hyperscale data centers and high-performance computing environments striving to optimize both throughput and energy consumption.
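The capacity gain from PAM4 comes from encoding two bits per symbol using four voltage levels, versus one bit per symbol for NRZ, so the bit rate doubles at the same baud rate. A minimal illustrative encoder (the Gray-coded level mapping is the standard convention for PAM4; the exact values here are for illustration only):

```python
# Gray-coded mapping of bit pairs to the four PAM4 amplitude levels:
# adjacent levels differ by one bit, limiting errors from level confusion.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a bit sequence to PAM4 symbols, two bits per symbol."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

symbols = pam4_encode([1, 0, 0, 1, 1, 1, 0, 0])
print(symbols)   # -> [3, -1, 1, -3]: 4 symbols carry 8 bits, twice NRZ's rate
```

The trade-off is a smaller voltage margin between levels, which is why PAM4 generations pair the signaling change with forward error correction at the protocol layer.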

Looking beyond the latest announcement, PCI-SIG has already commenced pathfinding efforts for PCIe 8.0, seeking to ensure sustained engineering momentum and continual enhancement of the PCI Express ecosystem. This continued roadmap reflects the organization's commitment to supporting both current and future product requirements from cloud service providers, semiconductor manufacturers, and enterprise IT stakeholders. With every new specification, PCI-SIG reinforces its leadership in defining interconnect standards that power modern digital infrastructure.

Artificial Intelligence speeds quantum encryption threat timeline

Research from Google and Oratomic suggests quantum computers capable of breaking core internet encryption may arrive sooner than expected. Artificial Intelligence played a key role in improving one of the new algorithms, raising fresh urgency around post-quantum security.

New methods aim to improve Large Language Model reasoning

A new study on arXiv outlines algorithmic techniques designed to strengthen Large Language Model reasoning and reduce hallucinations. The work reports better logical consistency and stronger performance on mathematical and coding benchmarks.

Nvidia acquisition of SchedMD raises Slurm neutrality concerns

Nvidia’s purchase of SchedMD has given it control of Slurm, an open-source scheduler that sits at the center of many supercomputing and large-model training systems. Researchers and engineers are watching for signs that support could tilt toward Nvidia hardware over AMD and Intel alternatives.

Mustafa Suleyman says Artificial Intelligence compute growth is still accelerating

Mustafa Suleyman argues that Artificial Intelligence development is being propelled by simultaneous advances in chips, memory, networking, and software efficiency rather than nearing a hard limit. He contends that rising compute capacity and falling deployment costs will push systems beyond chatbots toward more capable agents.
