Tachyum unveils 2 nm Prodigy universal processor for Artificial Intelligence rack efficiency

Tachyum announced a 2 nm Prodigy Universal Processor aimed at running much larger Artificial Intelligence models at a fraction of current cost, and claimed dramatic rack-level performance gains versus NVIDIA Rubin systems.

Tachyum announced details and specifications for its 2 nm Prodigy Universal Processor, positioned to support Artificial Intelligence models with parameter counts many orders of magnitude larger than those handled by existing solutions, at reduced cost. The company described the design as a universal processor and said the product family targets significantly higher rack efficiency for inference workloads than current alternatives.

In published comparisons, Tachyum said Prodigy Ultimate delivers up to 21.3x higher Artificial Intelligence rack performance than the NVIDIA Rubin Ultra NVL576, while Prodigy Premium delivers up to 25.8x higher Artificial Intelligence rack performance than the Vera Rubin 144. The announcement positions the Prodigy family against existing Rubin systems and presents the new chips as a substantial multiplier on rack-level inference throughput.

Tachyum also stated that the 2 nm Prodigy is the first chip to exceed 1,000 PFLOPs on inference, and that technical details for the design will be published within a week. For context, the announcement cited NVIDIA Rubin as delivering 50 PFLOPs. The company emphasized the combination of process node, claimed inference performance, and cost efficiency as the primary differentiators of the Prodigy Universal Processor lineup.

YouTube expands deepfake detection to Hollywood talent

YouTube is opening its likeness protection system to actors, athletes, musicians and creators beyond its own platform. The move gives public figures a way to flag and request removal of damaging Artificial Intelligence-generated replicas while YouTube weighs broader rules and possible future monetization.

Adobe plans outcome-based pricing for Artificial Intelligence agents

Adobe is positioning its Artificial Intelligence agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative Artificial Intelligence tools to business customers.

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.
