RDMA for S3-compatible storage accelerates Artificial Intelligence workloads

RDMA for S3-compatible storage uses remote direct memory access to speed S3-API object storage access for Artificial Intelligence workloads, reducing latency, lowering CPU use and improving throughput. Nvidia and multiple storage vendors are integrating client and server libraries to enable faster, portable data access across on-premises and cloud environments.

Enterprises are generating vast volumes of unstructured data, and Artificial Intelligence workloads are becoming increasingly data-intensive. The article frames object storage as a cost-effective option that historically served archives, backups and data lakes but has lacked the performance needed for fast-paced Artificial Intelligence training and inference. The need for scalable, portable storage between on-premises infrastructure and the cloud is driving exploration of new approaches to object storage performance.

Remote direct memory access, or RDMA, for S3-compatible storage is presented as a solution that accelerates the S3 application programming interface (API)-based storage protocol. By offloading data transfers from the host CPU and using RDMA-enabled networking, the approach promises higher throughput per terabyte, improved throughput per watt, lower cost per terabyte and much lower latency than traditional TCP-based transports. Nvidia has developed RDMA client and server libraries: storage partners have incorporated the server libraries into their products, while the client libraries run on GPU compute nodes to enable faster data access for Artificial Intelligence workloads and better GPU utilization. The article notes that the initial libraries are optimized for Nvidia GPUs and networking, while the architecture remains open to other vendors and contributors.
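The efficiency claims above can be made concrete with a small calculation. The sketch below is purely illustrative: the figures are hypothetical placeholders, not vendor benchmarks, and simply show how offloading transfers from the host CPU (so the same node sustains more throughput at similar power) moves the throughput-per-watt and cost-per-terabyte metrics the article cites.

```python
# Illustrative efficiency metrics for comparing a TCP-based transport
# with an RDMA-accelerated one. All numbers are hypothetical.

def throughput_per_watt(throughput_gbps: float, power_watts: float) -> float:
    """Gigabits per second delivered per watt of node power."""
    return throughput_gbps / power_watts

def cost_per_terabyte(total_cost_usd: float, capacity_tb: float) -> float:
    """Amortized cost in USD per terabyte of usable capacity."""
    return total_cost_usd / capacity_tb

# Hypothetical node profiles: RDMA offload lets the node push more
# throughput for roughly the same power draw.
tcp_node = {"throughput_gbps": 100.0, "power_watts": 500.0}
rdma_node = {"throughput_gbps": 180.0, "power_watts": 520.0}

print(f"TCP : {throughput_per_watt(**tcp_node):.3f} Gbps/W")
print(f"RDMA: {throughput_per_watt(**rdma_node):.3f} Gbps/W")
print(f"Cost: ${cost_per_terabyte(120_000.0, 1_000.0):.2f}/TB")
```

Because fewer nodes are then needed to hit a target aggregate throughput, the same framing also drives down cost per terabyte at the cluster level.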

Several leading object storage vendors are adopting the technology. Cloudian, Dell Technologies and HPE are integrating RDMA for S3-compatible libraries into HyperStore, ObjectScale and Alletra Storage MP X10000 respectively. Executives quoted in the piece emphasize scalability, portability and reduced total cost of ownership for large-scale Artificial Intelligence deployments and AI factories. Nvidia’s libraries are available to select partners now and are expected to be generally available via the Nvidia CUDA Toolkit in January, alongside information about a new Nvidia object storage certification as part of the Nvidia-Certified Storage program.

Impact Score: 68

UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how government should balance innovation with safeguards. The Education Committee will examine opportunities to improve teaching and workload alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see Artificial Intelligence training gap as shadow tool use grows

New research finds that 6 in 10 UK businesses say employees lack comprehensive Artificial Intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected Artificial Intelligence returns.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
