Intel Achieves Full NPU Support in MLPerf Client v0.6 Benchmark

Intel becomes the first to offer full neural processing unit support in the MLPerf Client v0.6 benchmark, demonstrating advances in large language model performance on client devices.

Intel has announced a significant milestone in Artificial Intelligence hardware by becoming the sole company to achieve full neural processing unit (NPU) support in the MLPerf Client v0.6 benchmark. This new version of the benchmark delivers the industry's first standardized assessment of large language model (LLM) performance on client NPUs, specifically evaluating the real-world capabilities of local machine learning workloads on personal computing devices.

According to Intel, its Core Ultra Series 2 processors demonstrated the ability to generate output on both the GPU and the NPU at speeds far exceeding typical human reading rates. This accomplishment highlights the acceleration and efficiency improvements possible with hardware-optimized Artificial Intelligence support at the client level, as opposed to relying solely on traditional CPUs or cloud-based solutions.
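To put "faster than typical human reading rates" in concrete terms, the short Python sketch below converts a token-generation rate into an approximate words-per-minute figure and compares it with a common reading speed. The token-to-word ratio, the assumed reading speed, and the example generation rate are all illustrative assumptions, not measured Intel or MLPerf Client v0.6 results.

```python
# Rough illustration: converting an LLM token-generation rate into an
# equivalent reading speed. All numbers below are assumptions for
# illustration, not measured MLPerf Client v0.6 results.

TOKENS_PER_WORD = 0.75      # assumed: ~0.75 English words per generated token
HUMAN_READING_WPM = 250     # assumed: typical adult reading speed, words/minute

def words_per_minute(tokens_per_second: float) -> float:
    """Convert a token-generation rate (tokens/s) into words per minute."""
    return tokens_per_second * TOKENS_PER_WORD * 60

if __name__ == "__main__":
    # Hypothetical on-device generation rate, in tokens per second.
    example_rate_tps = 20.0
    wpm = words_per_minute(example_rate_tps)
    print(f"{example_rate_tps:.0f} tokens/s ≈ {wpm:.0f} words/min "
          f"({wpm / HUMAN_READING_WPM:.1f}x a {HUMAN_READING_WPM} wpm reader)")
```

Under these assumed figures, even a modest 20 tokens per second corresponds to roughly 900 words per minute, several times faster than a typical reader, which is the sense in which on-device generation can comfortably outpace human reading.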

Intel attributes its leadership in this benchmark to the company's ongoing hardware-software co-optimization efforts. Daniel Rogers, vice president and general manager of PC Product Marketing at Intel, underscored the importance of these achievements in democratizing Artificial Intelligence for general PC users. By enabling comprehensive NPU and GPU acceleration for Artificial Intelligence workloads on client PCs, Intel aims to set a new standard in both performance and accessibility across the industry.
