SuperX launches all-in-one multi-model server series

SuperX has released an all-in-one multi-model server series preconfigured with OpenAI's GPT-OSS models, aimed at delivering enterprise-grade Artificial Intelligence inference and multi-model collaboration.

SuperX announced the launch of its all-in-one multi-model servers, a hardware and software stack designed to run multiple models in tandem for enterprise workloads. The company positions the new servers as out-of-the-box ready, multi-model fused, and tightly integrated with application scenarios, offering a packaged full-stack solution that includes hardware, runtime, and preloaded models. This release follows SuperX's debut of the XN9160-B200 server on July 30, 2025, and represents an expansion of the vendor's portfolio toward integrated infrastructure for large model deployments.

The servers arrive preconfigured with high-performance large language models, including OpenAI's newly released GPT-OSS-120B and GPT-OSS-20B. SuperX says the platform supports dynamic collaboration between models, enabling multi-model intelligent agents to route tasks, combine reasoning skills, and share knowledge across networks. That approach aims to move large model applications from single-model inference to coordinated multi-model workflows, a shift that could simplify development and reduce integration overhead for complex enterprise use cases.
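The task-routing idea described above can be sketched in a few lines. This is an illustrative assumption, not SuperX's actual orchestration logic: the routing rule (prompt length as a rough proxy for task complexity) and the function name are hypothetical, while the model names come from the announcement.

```python
def route_task(prompt: str) -> str:
    """Pick a GPT-OSS model for a task (hypothetical routing rule).

    Longer, reasoning-heavy prompts go to the larger 120B model;
    short requests stay on the cheaper 20B model.
    """
    if len(prompt.split()) > 50:
        return "gpt-oss-120b"
    return "gpt-oss-20b"


# Short request -> small model; a long multi-step brief -> large model.
print(route_task("Summarize this sentence."))
```

A production orchestrator would of course route on richer signals (task type, latency budget, confidence from a first-pass model), but the principle is the same: a dispatch layer in front of several co-deployed models.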

SuperX highlighted benchmark results tied to OpenAI's August 5, 2025 press release, noting that GPT-OSS-120B matches, and on some benchmarks such as MMLU and AIME surpasses, the performance of several leading closed-source models. The company frames that performance as a way to deliver world-class inference and knowledge processing at superior cost efficiency, especially important for enterprises balancing accuracy, latency, and operating expenses. Security and customization are also emphasized, with the new servers offered in specifications tailored to organizations of various scales.

For buyers the offering represents an integrated alternative to assembling separate compute, model licensing, and orchestration layers. SuperX is marketing the multi-model servers as an enterprise-grade solution for generative artificial intelligence adoption, especially where multi-model collaboration, on-premise control, or predictable total cost of ownership matters. The announcement does not include detailed pricing or a full set of technical specifications; interested enterprise clients are directed to SuperX for deployment options and support agreements.

