Grok Code Fast 1 targets fast, low-cost coding with a new model architecture

Grok Code Fast 1 is a new reasoning model from xAI that focuses on agentic coding, tool use, and full-stack development support, with pricing and context limits aimed at economical large-scale usage.

Grok Code Fast 1 is presented as a speedy and economical reasoning model optimized for agentic coding tasks. Built from scratch on a brand-new model architecture, it was trained on a pre-training corpus rich in programming-related content and on post-training datasets that mirror real-world pull requests and coding workloads. The model is described as having mastered common developer tools such as grep, terminal commands, and file editing operations, making it well suited for integration into integrated development environments and other coding assistants. According to the listing, Grok Code Fast 1 is exceptionally versatile across the full software development stack, with particular strength in TypeScript, Python, Java, Rust, C++, and Go.

The article notes that Grok Code Fast 1 was released on August 28, 2025, with application programming interface (API) access available through xAI. The performance section references benchmarks and datasets, with scores sourced from the model scorecard, research paper, or official blog posts, although specific benchmark numbers are not detailed in the text. The knowledge cutoff, parameter count, training data specifics, and license details are all marked as unknown or proprietary, indicating that only limited technical transparency is available at this stage. A frequently asked questions section reiterates the release date and situates the model within a broader benchmarking hub covering a wide range of large language models and evaluation suites.

Pricing information is provided for Grok Code Fast 1 through xAI: an input price of 0.20 per 1,000,000 tokens and an output price of 1.50 per 1,000,000 tokens, shown alongside other capability metrics. The maximum input context window is listed as 256K tokens and the maximum output length as 10K tokens, a configuration that targets large-context coding tasks. The reported latency is 1.38 s and the throughput is 76.41 c/s, presented together as indicators of the model's responsiveness and capacity under load. The pricing table notes that quantization options for input and output are not yet specified and that text is the primary modality, with the remaining media fields left blank. The article closes by indicating that API access for Grok Code Fast 1 through the LLM Stats gateway is coming soon, with promotional mentions of saving up to 30% on AI inference costs for companies running AI products in production.
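To make the listed rates concrete, here is a minimal sketch of a per-request cost estimate using the per-million-token prices above. The function name and structure are illustrative, not part of any xAI SDK, and the example assumes both prices are in the same (unspecified) currency unit.

```python
# Illustrative cost estimator using the listed Grok Code Fast 1 rates:
# 0.20 per 1,000,000 input tokens and 1.50 per 1,000,000 output tokens.
# Names here are hypothetical, not from xAI's API.

INPUT_PRICE_PER_M = 0.20   # listed input price per 1,000,000 tokens
OUTPUT_PRICE_PER_M = 1.50  # listed output price per 1,000,000 tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A request that fills the full 256K-token context window and emits
# the maximum 10K-token output:
cost = estimate_cost(256_000, 10_000)
print(f"{cost:.4f}")  # 0.0512 for input + 0.0150 for output = 0.0662
```

At these rates, even a request using the entire context window costs well under a tenth of a unit, which is consistent with the article's framing of the model as aimed at economical large-scale usage.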


