SoftBank and AMD validate GPU partitioning for artificial intelligence workloads

SoftBank and AMD are jointly validating a GPU partitioning system for AMD Instinct accelerators that lets a single chip run multiple artificial intelligence workloads in parallel, with partitions sized to each model’s resource needs. The work targets more efficient use of next-generation artificial intelligence infrastructure amid manufacturing delays for AMD’s upcoming Instinct generation.
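
Neither company has published implementation details, but the core idea of GPU partitioning can be sketched in a few lines: one accelerator’s memory is divided into isolated partitions, and each model is placed in the smallest free partition that fits its footprint. The partition sizes, memory figures, and workload names below are illustrative assumptions, not SoftBank’s or AMD’s actual scheme.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative sketch only: partition sizes, memory figures, and workload names
# are assumptions for demonstration, not the validated SoftBank/AMD configuration.

TOTAL_HBM_GB = 192                       # assumed memory on one accelerator
PARTITION_SIZES_GB = [24, 24, 48, 96]    # assumed fixed partition layout


@dataclass
class Workload:
    name: str
    memory_gb: float        # estimated working set (weights, KV cache, activations)


@dataclass
class Partition:
    size_gb: float
    assigned: Optional[Workload] = None


def schedule(workloads: List[Workload], partitions: List[Partition]) -> None:
    """Best-fit placement: each workload gets the smallest free partition that holds it."""
    for w in sorted(workloads, key=lambda w: w.memory_gb, reverse=True):
        free = [p for p in partitions if p.assigned is None and p.size_gb >= w.memory_gb]
        if not free:
            print(f"{w.name}: no free partition large enough ({w.memory_gb} GB)")
            continue
        best = min(free, key=lambda p: p.size_gb)
        best.assigned = w
        print(f"{w.name}: placed in {best.size_gb} GB partition")


if __name__ == "__main__":
    assert sum(PARTITION_SIZES_GB) <= TOTAL_HBM_GB
    partitions = [Partition(s) for s in PARTITION_SIZES_GB]
    workloads = [
        Workload("llm-70b-inference", 90),
        Workload("vision-model", 40),
        Workload("llm-7b-inference", 20),
        Workload("embedding-service", 12),
    ]
    schedule(workloads, partitions)
```

Running the sketch places the 90 GB model in the 96 GB partition and packs the smaller services into the remaining slices; this kind of per-accelerator bin-packing is what lets one chip serve several models in parallel instead of dedicating a whole device to each.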

Meta and Nvidia partner on large scale artificial intelligence infrastructure

Meta and Nvidia have signed a multiyear, multigenerational deal to deploy millions of Blackwell and Rubin GPUs in new hyperscale data centers optimized for training and inference workloads. The partnership brings Nvidia CPUs, GPUs, and Spectrum-X networking into Meta’s long-term artificial intelligence infrastructure roadmap.