Palit introduced its new Pandora NXNano mini PC at Computex 2025, an ultra-compact platform built around the NVIDIA Jetson Orin NX Super system-on-module. At the heart of the NXNano is an eight-core Arm Cortex-A78AE CPU paired with a 1,024-core Ampere-architecture GPU featuring 32 Tensor Cores. This combination delivers up to 157 TOPS of sparse AI throughput (78 TOPS dense), positioning the mini PC as a serious contender for demanding edge AI workloads.
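The gap between the sparse and dense figures is worth unpacking. NVIDIA's sparse TOPS numbers generally assume 2:4 structured sparsity, where the Tensor Cores skip two of every four weights and roughly double effective throughput; a minimal back-of-envelope sketch (the 2× speedup factor is the idealized assumption, not a Palit-published detail):

```python
# Sketch: how a dense TOPS figure relates to a "sparse" TOPS figure
# under 2:4 structured sparsity (idealized 2x speedup assumed).
dense_tops = 78           # dense figure quoted for the module
sparsity_speedup = 2      # 2:4 structured sparsity skips half the weights
sparse_tops = dense_tops * sparsity_speedup
print(sparse_tops)        # 156, in line with the quoted "up to 157 TOPS"
```

In other words, the headline 157 TOPS applies only to models pruned to the 2:4 sparsity pattern; unpruned networks should be budgeted against the 78 TOPS dense figure.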
To keep data-intensive AI models and large datasets fed, the NXNano ships with up to 16 GB of LPDDR5 memory delivering 102.4 GB/s of bandwidth, plus a pre-installed 128 GB PCIe Gen 4 solid-state drive. Connectivity is extensive: dual Gigabit Ethernet ports, multiple USB 3.2 Gen 2 ports (Type-A and Type-C OTG), USB 2.0, HDMI 2.0 for display output, and four M.2 slots for adding storage, Wi-Fi, 5G/LTE, or video-capture modules. An 8-lane MIPI CSI-2 camera interface and a suite of I/O headers supporting I²C, SPI, UART, GPIO, and CAN Bus round out the package, covering a broad range of sensor and peripheral requirements.
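The 102.4 GB/s figure checks out against a typical LPDDR5 configuration. A quick sketch (the 128-bit bus width and 6,400 MT/s transfer rate are our assumptions about the module's memory interface, consistent with Jetson Orin NX specifications, not numbers Palit states directly):

```python
# Back-of-envelope check of the quoted 102.4 GB/s memory bandwidth.
# Assumed LPDDR5 configuration: 128-bit bus at 6400 MT/s.
bus_width_bits = 128
transfer_rate = 6400e6                      # transfers per second
bandwidth_gbs = bus_width_bits / 8 * transfer_rate / 1e9
print(bandwidth_gbs)                        # 102.4
```

That bandwidth, rather than raw TOPS, is often the practical ceiling for large-model inference at the edge, which is why the full 16 GB configuration matters.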
The NXNano is housed in a robust aluminium chassis measuring 145 × 123 × 66 mm and weighing just 470 grams. Its "superior thermal design", built around two 50 mm high-efficiency fans, keeps temperatures in check during sustained AI workloads. Both the base and side panels are removable, enabling custom modifications such as 3D-printed shells or add-on expansion modules. With support for DC input from 12 to 36 volts, the NXNano targets a variety of edge deployments, including retail automation, digital signage, robotics, and education. Its combination of enterprise-grade performance and a compact, customizable design is aimed at developers and organizations seeking reliable AI inference at the network edge.
