Linux kernel 7.0 accelerates file-cache memory reclaim for data-heavy workloads

Linux kernel v7.0 introduces major improvements to how cached file memory is reclaimed under pressure, cutting reclaim times significantly on both Arm64 and x86 servers. The change targets large-scale data and Artificial Intelligence workloads that keep tens or hundreds of gigabytes of file data in RAM.

The release speeds up reclaiming system memory used to cache large files, targeting scenarios where servers keep large datasets in RAM to avoid frequent storage access. According to notes on the kernel mailing list, a patch series queued for the Linux 7.0 merge window showed reclaim-speed improvements of up to 75% in testing. The work makes freeing cached file data more efficient when memory pressure rises, reducing stalls and latency while the kernel cleans up.

In one benchmark, developers allocated 10 GB of file-backed data in memory and then reclaimed 8 GB of it. On a 32-core Arm64 server, the reclaim completed about 75% faster than with the previous kernel, while on an x86 machine the improvement was reported at over 50%. Both architectures therefore benefit meaningfully from the new reclaim logic, with especially strong gains on the many-core Arm64 servers common in cloud environments.

The optimization is particularly relevant for systems running large databases or other memory-intensive services, where the kernel may keep tens or even hundreds of gigabytes of frequently accessed file data in RAM for faster access. When memory pressure builds and some of that cached data must be freed, the cleanup now finishes significantly faster, reducing the impact on running workloads. Typical consumer systems are unlikely to notice a difference, but hyperscalers, high-performance computing simulations, Artificial Intelligence runs, and other data-heavy workloads can see a significant boost. The improvement was authored by Baolin Wang of Alibaba, who focused on optimizing how the kernel handles large blocks of cached file memory.


