Phison expands local Artificial Intelligence inferencing with flash memory

Phison is using its GTC showcase to position flash storage as an added memory tier for local Artificial Intelligence systems. The company says its aiDAPTIV technology extends working memory across GPU memory, system RAM and flash to support larger models and long-context inference.

At GTC booth 119, Phison Electronics is presenting a showcase focused on multi-tier memory architecture for NVIDIA-powered local Artificial Intelligence platforms. The company is targeting a growing memory constraint as demand for Artificial Intelligence-ready platforms continues to rise, particularly for workloads involving larger models and long-context inference.

Fine-tuning and inference on proprietary data require massive compute and memory resources, creating investment challenges for organizations: rising solution costs and workflow bottlenecks slow time-to-market for revenue-generating innovation. To address this, Phison introduced aiDAPTIV technology for local and edge Artificial Intelligence use cases, using Pascari SSDs as a new Artificial Intelligence memory tier.

Phison says aiDAPTIV intelligently extends and manages Artificial Intelligence working memory across GPU memory, system RAM and flash. The company presented the technology as a way to apply multi-tier memory architecture principles to local Artificial Intelligence systems as NVIDIA infrastructure advances GPU memory capabilities for inference workloads in data center environments.

Built on high-endurance flash optimized for sustained paging and context retention, aiDAPTIV is designed to support memory-intensive inference and fine-tuning workloads under fixed hardware configurations. Phison says the flash-based memory tier enables organizations to support evolving workloads on local systems while maintaining data privacy and improving long-term infrastructure efficiency.
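The general idea behind a multi-tier working-memory hierarchy can be sketched in a few lines of code. The sketch below is purely conceptual and is not Phison's aiDAPTIV implementation or API: it models three tiers (a small "gpu" tier, a larger "ram" tier, and an unbounded "flash" tier) with least-recently-used demotion between tiers and promotion back to the fastest tier on access. The class name, tier names, and capacities are illustrative assumptions.

```python
from collections import OrderedDict

class TieredKVCache:
    """Conceptual sketch of tiered working memory, NOT Phison's API.
    Entries demote gpu -> ram -> flash as faster tiers fill, and
    promote back to the gpu tier when accessed."""

    def __init__(self, gpu_slots=2, ram_slots=4):
        # Illustrative capacities; the flash tier is unbounded here.
        self.caps = {"gpu": gpu_slots, "ram": ram_slots}
        self.tiers = {"gpu": OrderedDict(),
                      "ram": OrderedDict(),
                      "flash": OrderedDict()}

    def _place(self, tier, key, value):
        self.tiers[tier][key] = value
        cap = self.caps.get(tier)
        if cap is not None and len(self.tiers[tier]) > cap:
            # Demote the least recently used entry to the next tier down.
            lru_key, lru_val = self.tiers[tier].popitem(last=False)
            next_tier = {"gpu": "ram", "ram": "flash"}[tier]
            self._place(next_tier, lru_key, lru_val)

    def put(self, key, value):
        # New data always lands in the fastest tier first.
        self._place("gpu", key, value)

    def get(self, key):
        # Promote on access: fetched data returns to the fastest tier.
        for tier in ("gpu", "ram", "flash"):
            if key in self.tiers[tier]:
                value = self.tiers[tier].pop(key)
                self._place("gpu", key, value)
                return value
        raise KeyError(key)

    def tier_of(self, key):
        for tier in ("gpu", "ram", "flash"):
            if key in self.tiers[tier]:
                return tier
        return None
```

In this toy model, writing seven entries with the default capacities leaves the two most recent in the "gpu" tier, the next four in "ram", and the oldest in "flash"; reading the oldest entry promotes it back to the top, with cascading demotions below. Real systems would additionally weigh transfer bandwidth, flash endurance, and access patterns when deciding what to demote.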

