Intel prepares open-source drivers for next-gen Xe3P graphics

Intel is seeding early open-source support for its upcoming Xe3P graphics architecture in Mesa drivers, aiming to have Linux compatibility ready when new GPUs and Nova Lake processors arrive.

Intel has begun early open-source enablement for its next-generation Xe3P graphics architecture within the Mesa OpenGL ‘Iris’ and Vulkan ‘Anvil’ drivers. The current work is not yet aimed at making the drivers fully functional, but instead focuses on creating code paths that can be extended for this graphics IP over time. As a result, preliminary support is still described as being a few weeks away, with additional development required behind the scenes before users will see tangible benefits.

The first versions of Xe3P GPUs are expected to appear this year, with the IP arriving in several configurations across Intel’s product stack. Some variants are planned for the upcoming ‘Nova Lake’ desktop processors targeting the consumer market, which are anticipated later this year. Early signs from the open-source work indicate that ‘Nova Lake-P’ processors will integrate Xe3P-LPG for on-chip graphics, positioning the new architecture as a core part of Intel’s next mobile and desktop platforms.

Beyond integrated graphics, ‘Nova Lake-P’ processors are also set to incorporate additional Xe3P-based IP blocks tailored for specialized tasks. The Xe3P-LPM block will focus on media processing for decoding and encoding workloads, while Xe3P-LPD will handle display output processing. The Xe3P IP is further slated to feature in Intel’s Artificial Intelligence-focused ‘Crescent Island’ inference GPU, which will feature 160 GB of onboard LPDDR5X. Performance expectations for the Xe3P GPU family have not yet been disclosed, leaving prospective users and developers waiting for more detailed benchmarks and claims.


Adobe plans outcome-based pricing for Artificial Intelligence agents

Adobe is positioning its Artificial Intelligence agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative Artificial Intelligence tools to business customers.

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.
