NinjaTech Artificial Intelligence and Cerebras launch Fast Deep Coder

NinjaTech Artificial Intelligence and Cerebras Systems announced Fast Deep Coder, a development solution the companies say speeds software creation by 5-10x and shortens iteration cycles from 10-15 minutes to 1-2 minutes.

NinjaTech Artificial Intelligence, a Silicon Valley-based agentic Artificial Intelligence company, has launched Fast Deep Coder in partnership with Cerebras Systems. The development solution is designed to accelerate software creation by 5-10x compared with traditional coding workflows. The product follows NinjaTech's recent release of Fast Deep Research and pairs a Cerebras-accelerated reasoning model with NinjaTech's SuperNinja virtual machine environment. According to the announcement, Fast Deep Coder is intended to transform knowledge work by enabling much faster plan-code-validate cycles and keeping developers in a productive flow.

Fast Deep Coder operates inside SuperNinja, an AI agent coupled with a dedicated virtual machine per thread, and leverages Cerebras wafer-scale inference technology to sustain rapid, responsive reasoning. The system reduces iteration cycles from 10-15 minutes to 1-2 minutes, enabling developers to build, run, fix, and ship code while maintaining context. Key engineering features noted in the release include dedicated cloud VMs with preconfigured development tools that free local resources; end-to-end visibility into code execution, logs, and test results; persistent project context between iterations; and a VM-based execution environment intended to provide security boundaries. The product also includes native GitHub integration for branching, commits, and pull requests.
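The announcement does not document an API, but the plan-code-validate loop it describes can be sketched in general terms. The snippet below is purely illustrative Python and assumes nothing about NinjaTech's or Cerebras's actual interfaces: run_in_vm, propose_patch, the patch file name, and the shell and GitHub CLI commands are hypothetical stand-ins for whatever the SuperNinja agent runs inside its dedicated VM.

```python
import subprocess

def run_in_vm(command: str) -> subprocess.CompletedProcess:
    """Run a shell command; stands in for execution inside the per-thread VM."""
    return subprocess.run(command, shell=True, capture_output=True, text=True)

def propose_patch(context: list[str]) -> str:
    """Placeholder for the reasoning model proposing a code change (hypothetical)."""
    return "agent.patch"  # hypothetical patch file produced by the model

def plan_code_validate(task: str, max_iterations: int = 5) -> bool:
    """Sketch of a plan-code-validate loop: patch, test, read logs, repeat."""
    context = [f"Task: {task}"]                      # persistent project context
    for _ in range(max_iterations):
        patch = propose_patch(context)               # plan/code step
        run_in_vm(f"git apply {patch}")              # apply the proposed change

        tests = run_in_vm("pytest -q")               # validate: run tests in the VM
        context.append(tests.stdout + tests.stderr)  # feed logs and results back

        if tests.returncode == 0:                    # ship: branch, commit, open a PR
            run_in_vm("git checkout -b agent/fix")
            run_in_vm("git commit -am 'agent: apply validated patch'")
            run_in_vm("gh pr create --fill")         # GitHub CLI
            return True
    return False
```

The point of the sketch is that each iteration stays inside one VM, with execution logs and test output carried forward in the agent's context, which is what the companies credit for bringing cycle times down to 1-2 minutes.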

The announcement includes benchmark results for Fast Deep Coder (Qwen3-480B) evaluated on SWE-Bench at 500 iterations, where it scored 69.6 percent. The release compares that score with Sonnet 3.7 at 70.3 percent and Gemini 2.5 Pro at 63.2 percent, and emphasizes that Fast Deep Coder delivers comparable accuracy while operating at substantially faster inference speeds. NinjaTech and Cerebras positioned these performance and workflow gains as drivers of business benefits such as reduced development costs, faster time to market, improved resource utilization, and higher product quality, all enabled by the SuperNinja environment, which confines the agent's editors, shells, and execution workflows to the VM where they remain fully observable.

Impact Score: 72

How Intel became central to America’s Artificial Intelligence strategy

The Trump administration took a 10 percent stake in Intel in exchange for early CHIPS Act funding, positioning the struggling chipmaker at the core of U.S. Artificial Intelligence ambitions. The high-stakes bet could reshape domestic manufacturing while raising questions about government overreach.

NextSilicon unveils processor chip to challenge Intel and AMD

Israeli startup NextSilicon is developing a RISC-V central processor to complement its Maverick-2 chip for precision scientific computing, positioning it against Intel and AMD and in competition with Nvidia's systems. Sandia National Laboratories has been evaluating the technology, and the company claims faster, lower-power performance on some workloads without code changes.
