AAEON plans BOXER-8741AI with NVIDIA Jetson Thor for embedded artificial intelligence

AAEON has revealed development plans for the BOXER-8741AI, its first product built on the NVIDIA Jetson Thor, targeting embedded artificial intelligence use cases that require high real-time throughput. The company expects sample testing in September and mass production in November.

The system pairs a custom carrier board with an integrated Jetson Thor module to deliver a generational uplift in performance and scalability for embedded artificial intelligence application development. The announcement frames the BOXER-8741AI as a platform for solutions that demand sustained, real-time compute and multi-sensor processing.

At the heart of the platform is NVIDIA's Blackwell architecture. AAEON cites module-level figures showing up to 7.5x higher artificial intelligence compute and 3.5x greater energy efficiency than the NVIDIA Jetson AGX Orin, with peak artificial intelligence performance of up to 2,070 FP4 TFLOPS. AAEON positions the BOXER-8741AI for advanced deployments including humanoid robotics, smart healthcare systems, and autonomous machines, where higher compute density and efficiency can enable more complex models and lower-latency inference.
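For a rough sense of how the headline ratio could arise, the short Python sketch below divides the quoted 2,070 FP4 TFLOPS by the Jetson AGX Orin's published peak of 275 TOPS (sparse INT8). Pairing those two figures is an illustrative assumption; the announcement does not state how the 7.5x comparison was calculated, and the precisions differ.

```python
# Back-of-the-envelope check of the cited compute uplift.
# Assumption: the 7.5x figure compares Jetson Thor's 2,070 FP4 TFLOPS
# against Jetson AGX Orin's published 275 TOPS (sparse INT8); AAEON does
# not confirm this pairing, and the two figures use different precisions.

thor_peak_tflops_fp4 = 2070   # peak AI compute cited for the Jetson Thor module
orin_peak_tops_int8 = 275     # published peak AI compute for Jetson AGX Orin

uplift = thor_peak_tflops_fp4 / orin_peak_tops_int8
print(f"Approximate compute uplift: {uplift:.1f}x")  # ~7.5x
```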

The company has also provided a development timeline and I/O details. The BOXER-8741AI is expected to be available for sample testing in September, with full mass production slated for November. The carrier board includes four QSFP28 ports supporting 25GbE and four RJ-45 ports, three providing 1GbE and one providing 5GbE. These wired networking options are intended to support high-throughput data transfer from multiple synchronized real-time vision sensors and other high-bandwidth devices.
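To put the port mix in context, the sketch below sums the board's aggregate wired bandwidth and estimates how many uncompressed camera streams it could carry. The camera resolution, frame rate, and bit depth are illustrative assumptions, not specifications from AAEON, and protocol overhead is ignored.

```python
# Rough aggregate-bandwidth estimate for the BOXER-8741AI's wired I/O.
# Sensor parameters are illustrative assumptions, not AAEON specifications.

QSFP28_PORTS, QSFP28_GBPS = 4, 25        # four QSFP28 ports at 25GbE each
RJ45_1G_PORTS, RJ45_5G_PORT_GBPS = 3, 5  # three 1GbE ports and one 5GbE port

aggregate_gbps = QSFP28_PORTS * QSFP28_GBPS + RJ45_1G_PORTS * 1 + RJ45_5G_PORT_GBPS
print(f"Aggregate wired bandwidth: {aggregate_gbps} Gb/s")  # 108 Gb/s

# Hypothetical uncompressed vision stream: 1920x1080 at 60 fps, 16 bits per pixel.
stream_gbps = 1920 * 1080 * 60 * 16 / 1e9  # ~1.99 Gb/s per camera
print(f"Per-camera raw stream: {stream_gbps:.2f} Gb/s")

# Streams the four 25GbE ports alone could absorb, ignoring network overhead.
print(f"Raw streams within QSFP28 capacity: {int(QSFP28_PORTS * QSFP28_GBPS // stream_gbps)}")
```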

Overall, AAEON's BOXER-8741AI combines a custom board design with NVIDIA's latest module to target edge deployments that require high-performance, energy-efficient artificial intelligence processing and robust multi-sensor connectivity. The product schedule and port selection signal a focus on real-time vision and robotics applications where deterministic, high-bandwidth networking is important.


Samsung to supply half of NVIDIA’s SOCAMM2 modules in 2026

Hankyung reports Samsung Electronics has secured a deal to supply half of NVIDIA's SOCAMM2 modules in 2026 for the Vera Rubin Superchip, which pairs two 'Rubin' artificial intelligence GPUs with one 'Vera' CPU and moves from hardwired memory to DDR5 SOCAMM2 modules.

NVIDIA announces CUDA Tile in CUDA 13.1

CUDA 13.1 introduces CUDA Tile, a virtual instruction set for tile-based parallel programming that raises the programming abstraction above SIMT and abstracts tensor cores to support current and future tensor core architectures. The change targets workloads, including artificial intelligence, where tensors are a fundamental data type.
