Intel launches Xeon 600 workstation processors

Intel has launched its Xeon 600 series Granite Rapids-WS processors for workstations and high-end desktops, with a focus on Artificial Intelligence development, AVX-512, and extensive PCIe connectivity. The lineup scales up to 86 ‘Redwood Cove’ performance cores and supports high memory and I/O capacity.

Intel has launched the Xeon 600 series Granite Rapids-WS processors for workstations and high-end desktops. These processors primarily target Artificial Intelligence development, taking advantage of Intel AMX (FP16) accelerators, a fully fledged AVX-512 instruction set, and extensive PCIe I/O. The key talking point with this processor is its compute complex.

The processor comes with up to 86 ‘Redwood Cove’ P-cores, the same P-cores powering Core Ultra ‘Meteor Lake’ processors, featuring Hyper-Threading and a full AVX-512 pipeline. The top SKU therefore comes in an 86-core/172-thread configuration, with a maximum Turbo Boost frequency of 4.80 GHz. Intel claims this core configuration offers a 61% multithreaded performance gain over the previous generation.

The Intel Xeon 600 series is built on the LGA4710 package and supports the Intel W890 chipset. It features an 8-channel DDR5 memory interface (16 sub-channels), supporting up to 4 TB of ECC DDR5 memory at speeds of up to DDR5-8000. The processor’s PCIe root complex provides 128 PCI-Express Gen 5 lanes. Intel is also offering CPU overclocking features with these processors.

There are as many as 11 processor models, with core counts ranging from the top 86-core/172-thread down to 12-core/24-thread. All models come with 8-channel ECC DDR5 memory and 128 PCI-Express 5.0 lanes, and all support Intel’s vPro remote-manageability feature set.


AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection (MRC) adding an open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft, and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.

OpenAI curbs odd creature references in chatbot responses

OpenAI has adjusted its models after users complained about overly familiar responses and strange references to goblins, gremlins, pigeons, and raccoons. The company traced the behavior to a retired “nerdy” personality whose habits spread into broader model training.
