Intel brings AVX10.2 to desktop starting with "Nova Lake" processors

Intel plans to restore 512-bit AVX-512 and introduce AVX10.2 to desktop chips with "Nova Lake", a move intended to boost performance for local Artificial Intelligence workloads and optimized applications.

A leak from X user @InstLatX64 indicates Intel is preparing to reintroduce expanded vector instruction support to its client processors. The return will include AVX10.2 and 512-bit AVX-512 capabilities, and the technology could arrive as early as the launch of "Nova Lake". The leak is notable because Intel had previously removed AVX-512 from its mainstream desktop families, limiting fully accelerated 512-bit paths to server-grade Xeon parts.

The reason for the earlier removal is technical. Client chips that combine performance cores and efficiency cores ran into compatibility problems: the smaller efficiency cores did not fully support AVX-512, so Intel disabled the feature across the Alder Lake and Raptor Lake consumer lines to avoid functional mismatches between core types. That decision meant desktop users did not benefit from native 512-bit vector processing, even though certain workloads gain meaningful speedups from wider single-instruction, multiple-data (SIMD) paths.

By contrast, AMD adopted full AVX-512 support with "Zen 5", enabling the wider instructions across both desktop and server processors. That implementation avoided the prior double-pumped approach, in which each 512-bit operation was split into two 256-bit operations and processed over two cycles. Eliminating that splitting reduces overhead and improves throughput for software that is compiled and tuned for these instructions. The competition in instruction set support has clear performance ramifications for optimized applications, from scientific computing to multimedia and machine learning inference.

Reintroducing AVX10.2 and AVX-512 on client silicon signals that Intel wants to close the gap as more workloads move toward on-device processing. Local Artificial Intelligence models, in particular, stand to benefit when chips can perform wider vector math in fewer cycles. Software ecosystems will need updated compilers and libraries to exploit the changes, and developers who tune for wide vectors could see measurable gains. For consumers, the shift promises more headroom for demanding tasks, and for Intel it restores an important capability in the mainstream processor lineup.

Impact Score: 66

Saudi Artificial Intelligence startup launches Arabic LLM

Misraj Artificial Intelligence unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing Artificial Intelligence agents for enterprises and public institutions.

Introducing Mistral 3: open Artificial Intelligence models

Mistral 3 is a family of open, multimodal and multilingual Artificial Intelligence models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral Artificial Intelligence partner to accelerate new family of open models

NVIDIA and Mistral Artificial Intelligence announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise Artificial Intelligence deployments, starting Tuesday, Dec. 2.
