Windsurf expands Artificial Intelligence model lineup and developer tooling in rapid-fire updates

Windsurf has rolled out a steady stream of new Artificial Intelligence (AI) coding models, Arena comparison tooling, and workflow-automation features, while refining pricing, credits, and performance across its editor and plugins.

Windsurf is evolving its editor into a dense hub for agentic coding by continuously adding new AI models, richer comparison tools, and tighter integration with development workflows. Recent releases introduced GPT-5.4, GPT-5.3-Codex-Spark, GPT-5.2-Codex, GPT-5.2, GPT-5.1, GPT-5.1-Codex, GPT-5.1-Codex Max, GPT-5-Codex, Gemini 3.1 Pro, Gemini 3 Flash, Gemini 3 Pro, Gemini 2.5 Pro, Gemini 2.0 Flash, Claude Opus 4.6 and its fast mode, Claude Opus 4.5, Claude Sonnet 4.6, Claude Sonnet 4.5, Claude 3.7 Sonnet, GLM-5, Minimax M2.5, Kimi K2, DeepSeek-R1, DeepSeek-V3, o3-mini, o4-mini, Grok Code Fast 1, Falcon Alpha, SWE-1, SWE-1-lite, SWE-1-mini, SWE-1.5 and SWE-1.5 Free, along with internal models such as the SWE-grep-powered Fast Context. Many of these models carry promotional or tiered pricing: GPT-5.4 and Claude Opus 4.6 fast mode use multipliers such as 10x and 12x credits, models including GPT-5.2 and GPT-5-Codex have had limited-time 0x-credit access for paid users and 0.5x credits for free users, and GLM-5 and Minimax M2.5 are discounted to rates such as 0.75x and 0.25x credits.
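The multiplier pricing above reduces to simple arithmetic: the credits charged for a prompt scale with the model's multiplier. The sketch below is illustrative only; the base cost of 1 credit per prompt is an assumed example value, not Windsurf's published rate.

```python
# Hypothetical sketch of multiplier-based credit pricing.
# The base cost (1 credit per prompt) is an assumed example value,
# not Windsurf's actual rate.

def prompt_cost(base_credits: float, multiplier: float) -> float:
    """Credits charged for one prompt under a given model multiplier."""
    return base_credits * multiplier

# A 0x promotional model is free for the duration of the promotion;
# a 10x frontier model costs ten times the base; a 0.25x discount
# model costs a quarter of it.
assert prompt_cost(1, 0) == 0
assert prompt_cost(1, 10) == 10
assert prompt_cost(1, 0.25) == 0.25
```

Under this scheme a limited-time 0x promotion makes a model free regardless of its usual tier, which is why the same model can appear at both 0x and a paid multiplier at different times.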

The company is also building a benchmarking and experimentation layer directly into the IDE. Arena Mode lets developers run two Cascade agents side by side with hidden model identities, vote on which performs better, and track outcomes via personal and global leaderboards. Curated battle groups pit fast models against smart models, with entrants like GPT-5.3-Codex-Spark and Claude Opus 4.6 competing in the Frontier, Fast, and Hybrid Arenas. Plan Mode sits alongside Code and Ask as a dedicated planning surface that can create, and later sync, plan.md implementation files, while administrators can set default models for teams and define organization-wide allow and deny lists for auto-executed commands. Over time, Windsurf has also introduced parallel Cascade sessions, multi-Cascade panes and tabs, voice input to Cascade, @-mentions for conversations, browser tabs, terminals, and docs, a dedicated Cascade terminal, Dev Container support, Git worktrees, Fast Context with up to 20x faster context retrieval at more than 2,800 tokens per second, and a visual context-window indicator for long conversations.
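The core of Arena Mode as described above is blind A/B voting: two models are assigned hidden labels, the user votes, and only then is the winner's identity tallied on a leaderboard. A minimal sketch of that flow, with win counting that is purely illustrative and not Windsurf's actual leaderboard logic:

```python
# Illustrative sketch of blind A/B model voting as described for Arena Mode.
# The tallying below is a plain win count, not Windsurf's real implementation.
import random
from collections import Counter

def run_battle(models: list[str], vote_for: int) -> str:
    """Pick two models, hide them behind positions 0 ('A') and 1 ('B'),
    and return the identity of the model the user voted for."""
    a, b = random.sample(models, 2)      # hidden assignment to A/B
    return a if vote_for == 0 else b     # identity revealed only after the vote

models = ["GPT-5.3-Codex-Spark", "Claude Opus 4.6", "Gemini 3 Pro"]
leaderboard: Counter = Counter()
random.seed(7)                           # reproducible demo battles
for _ in range(10):
    leaderboard[run_battle(models, vote_for=0)] += 1
```

Because identities stay hidden until after the vote, the tally reflects perceived output quality rather than brand preference, which is the point of the curated "fast versus smart" battle groups.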

On the workflow, governance, and extensibility side, Windsurf significantly expanded Cascade Hooks, Memories, Rules, and Workflows so teams can audit or log prompts, enforce policies, and run custom commands at specific points in an agent's workflow, including post_cascade_response and post_write_code hooks and a new POST_CASCADE_RESPONSE_WITH_TRANSCRIPT hook. Enterprise and team customers get cloud configuration for Cascade Hooks, system-level rules via MDM, team-wide allow and deny lists for command auto-execution, analytics dashboards, knowledge integrations with Google Docs, Netlify-based Deploys with one-prompt deployments and subdomain management, and support for Devin service keys. Developers can configure Model Context Protocol servers with OAuth, drag and drop images and files into Cascade, use Codemaps to visualize and chat with codebases, generate Mermaid diagrams, and rely on auto-linting, improved diff zones, auto-run terminal commands, and expanded tab-based autocomplete through Windsurf Tab. A "Promo" label highlights newly available or discounted large language models, while a long list of patch releases steadily improves stability, startup reliability, rendering, performance, and cross-platform support, including Linux ARM64, Windows ARM, WSL, SSH, and the JetBrains IDEs.
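The hook mechanism described above amounts to running a custom command at a named point in the agent's workflow, typically for auditing or policy enforcement. The script below is a minimal sketch of such an audit hook; the payload field names and the idea that the hook receives a JSON event are assumptions for illustration, not Windsurf's documented interface.

```python
# Hypothetical post-response audit hook: append each Cascade event to a log.
# Assumption: the hook command is handed a JSON event payload; the field
# names ("event", "response") are illustrative, not Windsurf's actual schema.
import json
from datetime import datetime, timezone

def log_event(payload: dict, log_path: str = "cascade_audit.log") -> str:
    """Write one audit-log line for a hook event and return that line."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": payload.get("event", "post_cascade_response"),
        "summary": str(payload.get("response", ""))[:200],  # truncate for the log
    }
    line = json.dumps(entry)
    with open(log_path, "a") as f:
        f.write(line + "\n")
    return line

# Example invocation with an illustrative payload:
log_event({"event": "post_cascade_response",
           "response": "Refactored the auth module."})
```

A real deployment would register a script like this against a hook point such as post_write_code, with enterprise admins pushing the configuration centrally via the cloud Cascade Hooks configuration mentioned above.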

Impact Score: 52

Samsung previews HBM4 at NVIDIA GTC 2026

Samsung is using NVIDIA GTC 2026 to present its broader AI computing portfolio, led by its sixth-generation HBM4 memory. The company is positioning the lineup as a complete semiconductor offering spanning memory, logic, foundry, and advanced packaging.

MSI launches XpertStation WS300 on NVIDIA DGX Station architecture

MSI has introduced the XpertStation WS300, built on the NVIDIA DGX Station architecture, as a deskside AI supercomputer for large language model, generative AI, and data science workloads. The system pairs the performance of the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip with compact deployment, enterprise-focused networking, and large memory capacity.
