Canonical and Intel scrap GPU security mitigations to boost graphics performance

Canonical and Intel are removing security mitigations in Ubuntu 25.10 that had throttled GPU performance, while ASRock unveils a new Radeon card aimed at edge Artificial Intelligence workloads.

Canonical, the company behind Ubuntu, is working with Intel to disable certain security mitigations in the Intel GPU compute stack for Ubuntu 25.10, aiming to recover performance lost to earlier security trade-offs. The mitigations, originally applied to patch vulnerabilities, reportedly reduced GPU compute performance by up to 20 percent, affecting technologies such as OpenCL and Intel's Level Zero stack. The decision signals a renewed focus on raw performance for scientific and Artificial Intelligence workloads, potentially giving developers and researchers faster compute without the drag previously imposed by these patches.

Meanwhile, ASRock has pulled back the curtain on its latest graphics card, targeting the growing demand for professional visualization and Artificial Intelligence at the edge. The Radeon AI PRO R9700 Creator is built on 4 nm Navi 48 silicon, with 64 compute units corresponding to 4,096 stream processors, 128 dedicated Artificial Intelligence accelerators, and 64 ray tracing units. Its 32 GB of memory is the standout, sized to handle larger machine learning models and edge inference tasks, signaling ASRock's ambition to carve out a niche in professional and prosumer markets increasingly shaped by the requirements of local Artificial Intelligence acceleration.

Alongside these major moves, the graphics world is buzzing with grassroots innovation and competitive spectacle. AMD's FSR 4 upscaling technology has been unofficially coaxed onto last-generation RDNA 3 hardware by enterprising users, demonstrating improved upscaling quality but at a noticeable cost in frame rate. At SIGGRAPH and HPG 2025, Intel showcased real-time rendering advances with a trillion-triangle demo and live Artificial Intelligence-driven denoising on the Arc B580 GPU. In parallel, Nvidia is cementing a controversial pattern, keeping GDDR6 and an 8 GB frame buffer on its entry-level RTX 5050 and sparking debate over whether the card delivers real generational value for budget-conscious users. Collectively, these developments point to a dynamic and at times contentious era in graphics hardware, as device makers, developers, and users push innovation at the edge of performance, price, and capability.

Intel unveils massive artificial intelligence processor test vehicle showcasing advanced packaging

Intel Foundry has revealed an experimental artificial intelligence chip test vehicle that uses a package roughly eight times the reticle limit in size, combining multiple logic and memory tiles to demonstrate its latest manufacturing and packaging capabilities. The design highlights how Intel intends to build next-generation multi-chiplet artificial intelligence and high-performance computing processors with advanced interconnects and power delivery.

Reward models inherit value biases from large language model foundations

New research shows that reward models used to align large language models inherit systematic value biases from their pre-trained foundations, with Llama and Gemma models diverging along agency and communion dimensions. The work raises fresh safety questions about treating base model choice as a purely technical performance decision in Artificial Intelligence alignment pipelines.
