Cadence Unveils Tensilica NeuroEdge 130 AI Co-Processor for Advanced Physical AI Applications

Cadence's new Tensilica NeuroEdge 130 is a specialized co-processor that boosts efficiency and performance for diverse Artificial Intelligence applications in automotive, robotics, and more.

Cadence has introduced the Tensilica NeuroEdge 130 AI Co-Processor (AICP), engineered to work alongside any neural processing unit (NPU) and facilitate seamless, end-to-end functionality for cutting-edge agentic and physical Artificial Intelligence networks. The processor, targeted at advanced automotive, consumer, industrial, and mobile system-on-chip (SoC) applications, is built on the established Tensilica Vision DSP architecture. This heritage enables the NeuroEdge 130 AICP to achieve over 30% area savings and more than 20% reductions in dynamic power and energy consumption, all while maintaining performance standards. Moreover, the processor integrates with existing software, Artificial Intelligence compilers, libraries, and frameworks, ensuring a reduced time to market for developers and manufacturers.

Industry analysts, such as Karl Freund of Cambrian AI Research, emphasize the increasing importance of NPUs in physical Artificial Intelligence deployments, including sectors like autonomous vehicles, robotics, drones, industrial automation, and healthcare. Freund notes that, although NPUs are adept at handling computationally intensive Artificial Intelligence and machine learning tasks, there exist numerous non-MAC (multiply-accumulate) layers—such as pre- and post-processing jobs—that are more efficiently performed by specialized processors rather than traditional CPUs, GPUs, or DSPs. The NeuroEdge 130 AICP addresses this industry gap by offering optimized co-processing capabilities with a focus on low power consumption and high performance, which also supports future-proofing as Artificial Intelligence requirements evolve.

The NeuroEdge 130 AICP's compatibility with the established Tensilica software ecosystem is expected to facilitate easier and faster integration for customers. Cadence reports strong and growing interest in the processor, with multiple customer projects already underway. The introduction of this co-processor positions Cadence as a significant player in the evolution of processor architectures purpose-built for the rapidly growing demands of physical Artificial Intelligence systems across multiple high-growth industries.


Samsung shows 96% power reduction in NAND flash

Samsung researchers report a design that combines ferroelectric materials with oxide semiconductors to cut NAND flash string-level power by up to 96%. The team says the approach supports high density, including up to 5 bits per cell, and could lower power for data centers and mobile and edge-Artificial Intelligence devices.

The Download: fossil fuels and new endometriosis tests

This edition of The Download highlights how this year’s UN climate talks again omitted the phrase “fossil fuels” and why new noninvasive tests could shorten the nearly 10 years it now takes to diagnose endometriosis.

SAP unveils EU Artificial Intelligence Cloud: a unified vision for Europe’s sovereign Artificial Intelligence and cloud future

SAP launched EU Artificial Intelligence Cloud as a sovereign offering that brings together its milestones into a full-stack cloud and Artificial Intelligence framework. The offering supports EU data residency and gives customers flexible sovereignty and deployment choices across SAP data centers, trusted European infrastructure, or fully managed on-site solutions.

HPC won’t be an x86 monoculture forever

x86 dominance in high-performance computing is receding – its share of the TOP500 has fallen from almost nine in ten machines a decade ago to 57 percent today. The rise of GPUs, Arm and RISC-V and the demands of Artificial Intelligence and hyperscale workloads are reshaping processor choices.

A trillion dollars is a terrible thing to waste

Gary Marcus argues that the machine learning mainstream’s prolonged focus on scaling large language models may have cost roughly a trillion dollars and produced diminishing returns. He urges a pivot toward new ideas such as neurosymbolic techniques and built-in inductive constraints to address persistent problems.
