Enfabrica launches Ethernet-based memory fabric for artificial intelligence workloads

Enfabrica's new EMFASYS system connects high-speed Ethernet with CXL-based DDR5 memory, aiming to boost large-scale artificial intelligence compute efficiency.

Enfabrica Corporation has introduced its Elastic Memory Fabric System, named EMFASYS, marking the first commercial deployment of a memory fabric that integrates high-performance RDMA Ethernet networking with a large array of parallel Compute Express Link (CXL) DDR5 memory channels. The standalone appliance is engineered to enhance compute efficiency for large-scale, memory-bound artificial intelligence inference workloads, and is accessible to any GPU server at low, predictable latency over existing network infrastructure.

Driven by surging demand for generative, agentic, and reasoning-intensive artificial intelligence workloads—which outpace prior large language model deployments by a factor of 10 to 100 in compute requirements—EMFASYS addresses the bottleneck around GPU and high-bandwidth memory (HBM) utilization in modern compute racks. The system does this by offloading data from HBM to commodity DRAM through a caching hierarchy, load-balancing token generation across distributed artificial intelligence servers, and minimizing the risk of underutilized, stranded GPU capacity. The result is a system that can elastically scale to meet growing user, agent, and context volumes while delivering efficiency gains at the memory layer.

When deployed with Enfabrica's proprietary remote memory software stack, EMFASYS can deliver up to 50 percent lower cost per token per user. This enables large artificial intelligence and foundational large language model providers to offer more compelling price-to-performance ratios while accommodating the swelling volume of inference calls within cloud infrastructures. With this launch, Enfabrica establishes a new architectural standard for efficiently scaling memory resources in high-density artificial intelligence applications.

Impact Score: 77

Analog computing from waste heat

MIT researchers developed an analog computing approach that uses waste heat in electronic devices to process data without electricity. The technique performs matrix-vector multiplication with strong accuracy and could also help monitor heat in chips without extra energy use.

How Artificial Intelligence is reshaping financial services oversight

Financial services regulators are largely treating Artificial Intelligence as another technology governed by existing rules rather than building new securities-specific frameworks. History suggests that clearer expectations will emerge through examinations, enforcement, and supervisory guidance.

Nvidia faces gamer backlash over Artificial Intelligence shift

Nvidia is facing growing frustration from gamers as memory supply is steered toward data center chips and DLSS 5 becomes more central to game performance. The dispute highlights how far the company’s priorities have shifted toward enterprise Artificial Intelligence.

Executives see limited Artificial Intelligence productivity gains so far

Corporate enthusiasm around Artificial Intelligence has yet to translate into broad gains in employment or productivity, reviving comparisons to the long lag between early computing breakthroughs and measurable economic impact. Recent surveys and studies show mixed results, with strong expectations for future benefits but little consensus on present gains.
