Phison Electronics announced a showcase at GTC booth 119 focused on multi-tier memory architecture for NVIDIA-powered local artificial intelligence (AI) platforms. The company is targeting a growing memory constraint as demand for AI-ready platforms rises, particularly for workloads involving larger models and long-context inference.
Fine-tuning and inference on proprietary data require massive compute and memory resources, creating investment challenges for organizations; rising solution costs and workflow bottlenecks slow time-to-market for revenue-generating innovation. To address this, Phison introduced its aiDAPTIV technology for local and edge AI use cases, which uses Pascari SSDs as a new AI memory tier.
Phison says aiDAPTIV intelligently extends and manages AI working memory across GPU memory, system RAM, and flash. The company presented the technology as a way to apply multi-tier memory architecture principles to local AI systems, even as NVIDIA's data center infrastructure continues to expand GPU memory capacity for inference workloads.
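Phison has not published aiDAPTIV's internals, but the general idea of a multi-tier working-memory hierarchy can be sketched in a few lines. The example below is a hypothetical illustration, not Phison's implementation: blocks (such as model weights or KV-cache segments) live in the fastest tier that has room, and least-recently-used blocks are demoted down the hierarchy when a faster tier fills up. The tier names and capacities are assumptions chosen for clarity.

```python
from collections import OrderedDict

class TieredMemory:
    """Hypothetical sketch of a multi-tier working-memory manager."""

    def __init__(self, capacities):
        # capacities: list of (tier_name, max_units), fastest first,
        # e.g. [("gpu_hbm", ...), ("system_ram", ...), ("flash", ...)].
        # Each tier tracks its blocks in LRU order (oldest first).
        self.tiers = [(name, cap, OrderedDict()) for name, cap in capacities]

    @staticmethod
    def _used(blocks):
        return sum(blocks.values())

    def put(self, key, size, tier=0):
        name, cap, blocks = self.tiers[tier]
        if size > cap:
            raise ValueError(f"block larger than {name} tier")
        # Demote least-recently-used blocks until the new block fits.
        while self._used(blocks) + size > cap:
            lru_key, lru_size = next(iter(blocks.items()))
            del blocks[lru_key]
            if tier + 1 < len(self.tiers):
                self.put(lru_key, lru_size, tier + 1)
            # else: the slowest tier simply drops the block
        blocks[key] = size

    def touch(self, key):
        # Access a block: promote it back to the fastest tier.
        for _, _, blocks in self.tiers:
            if key in blocks:
                size = blocks.pop(key)
                self.put(key, size, 0)
                return True
        return False

    def location(self, key):
        for name, _, blocks in self.tiers:
            if key in blocks:
                return name
        return None
```

For instance, with a 100-unit GPU tier, placing two 60-unit KV-cache blocks forces the older one down to system RAM; touching it again promotes it back and demotes the other. Real systems add prefetching, asynchronous DMA, and flash-endurance management on top of this basic placement policy.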
Built on high-endurance flash optimized for sustained paging and context retention, aiDAPTIV is designed to support memory-intensive inference and fine-tuning workloads on fixed hardware configurations. Phison says this flash-based memory tier lets organizations accommodate evolving workloads on local systems while maintaining data privacy and improving long-term infrastructure efficiency.
