Intel Xeon 6 Chips Power Nvidia's Latest Data Center Artificial Intelligence System

Intel's new Xeon 6 processors land a spot in Nvidia's cutting-edge DGX B300, boosting Artificial Intelligence data center performance.

Intel has unveiled the latest additions to its Xeon 6 central processing unit (CPU) lineup, specifically engineered for server environments that power advanced graphics processing unit (GPU)–accelerated Artificial Intelligence systems. Among the new launches, one Xeon 6 processor has been selected as the host CPU in Nvidia's DGX B300, the newly announced data center platform specializing in Artificial Intelligence workloads.

The Xeon 6 series introduces upgraded capabilities designed to optimize workload management. With Priority Core Turbo technology, the processors can dynamically favor high-priority cores over low-priority ones, directing clock speed and compute resources to the threads that need them most. That flexibility matters for data centers running intensive Artificial Intelligence and machine learning applications, where the host CPU must keep accelerators busy. Intel's move aims to deliver better energy management and keep pace with the rapidly growing data processing demands of modern Artificial Intelligence deployments.
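
Priority Core Turbo itself is managed by the processor and its firmware, and this announcement does not describe a software interface for it. As a loose, hypothetical illustration of the underlying idea, the Linux sketch below shows how host software can steer a latency-critical GPU-feeding thread onto a designated set of cores while lower-priority preparation work runs elsewhere; the core IDs and function names are invented for the example.

    import os
    import queue
    import threading

    HIGH_PRIORITY_CORES = {0, 1}        # hypothetical cores reserved for GPU-feeding work
    LOW_PRIORITY_CORES = {2, 3, 4, 5}   # hypothetical cores for background preprocessing

    work_queue = queue.Queue()  # holds batch names; None signals shutdown

    def gpu_feeder():
        # Latency-critical loop: keep the accelerator supplied with batches.
        os.sched_setaffinity(0, HIGH_PRIORITY_CORES)  # pid 0 = the calling thread on Linux
        while True:
            batch = work_queue.get()
            if batch is None:
                break
            print(f"submitting {batch} to the accelerator")  # stand-in for real GPU submission

    def background_prep():
        # Lower-priority loop: prepare data without competing for the hot cores.
        os.sched_setaffinity(0, LOW_PRIORITY_CORES)
        for i in range(3):
            work_queue.put(f"batch-{i}")
        work_queue.put(None)  # tell the feeder to stop

    if __name__ == "__main__":
        threads = [threading.Thread(target=gpu_feeder), threading.Thread(target=background_prep)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

The point is only that operating systems already expose affinity controls for separating high-priority and low-priority work; Priority Core Turbo operates at the hardware level, below this kind of software hint.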

Karin Eibschitz Segal, Intel's corporate vice president and interim general manager of the Data Center Group, emphasized the strengthened partnership with Nvidia, remarking that the collaboration will help accelerate Artificial Intelligence adoption across industries. The integration of Intel's processors into Nvidia's DGX B300 marks a significant milestone in the ongoing race among semiconductor makers to gain ground in the fast-growing data center and Artificial Intelligence infrastructure sector. Even so, Intel's stock displayed some volatility following the news.
