Google unveils Artificial Intelligence agent that rewrites code to patch vulnerabilities

DeepMind introduced CodeMender, an Artificial Intelligence agent that not only finds software flaws but also generates and validates patches at scale. Google says it has already upstreamed 72 security fixes to open source projects and will work with maintainers of critical codebases.

Google’s DeepMind unit announced CodeMender, an Artificial Intelligence-powered agent that automatically detects, patches, and rewrites vulnerable code. Positioned as both reactive and proactive, CodeMender fixes newly discovered issues quickly while also hardening existing codebases to eliminate entire classes of vulnerabilities. The launch extends Google’s prior work in Artificial Intelligence-assisted vulnerability discovery, including initiatives such as Big Sleep and OSS-Fuzz.

According to DeepMind, CodeMender is built on Google’s Gemini Deep Think models to debug, flag, and remediate issues by addressing root causes and validating fixes to avoid regressions. The system also uses a large language model-based critique tool that compares the original and modified code to verify changes and self-correct if needed. Over six months of development, the team says it has already upstreamed 72 security fixes to open source projects, including contributions to repositories as large as 4.5 million lines of code.
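DeepMind has not published CodeMender’s internals, but the workflow it describes, propose a fix, check it against the project’s tests, then have a language model-based critique compare the original and modified code and send the agent back for another attempt if needed, can be pictured with a minimal sketch. The Python below is purely illustrative: every name in it (propose_patch, run_tests, llm_critique, mend) is a hypothetical placeholder standing in for model calls and build tooling, not a Google API.

```python
# Illustrative sketch only: all functions are hypothetical stand-ins,
# not CodeMender's actual implementation.

from dataclasses import dataclass


@dataclass
class PatchResult:
    patched_code: str
    accepted: bool
    feedback: str


def propose_patch(original_code: str, report: str, feedback: str = "") -> str:
    """Stand-in for a model call that drafts a fix, folding in prior feedback."""
    return original_code.replace("strcpy", "strlcpy")  # toy 'fix' for demo purposes


def run_tests(patched_code: str) -> bool:
    """Stand-in for rebuilding the project and running its test suite."""
    return True  # a real harness would compile and execute tests here


def llm_critique(original_code: str, patched_code: str) -> tuple[bool, str]:
    """Stand-in for a second model pass that diffs original vs. modified code."""
    if patched_code == original_code:
        return False, "Patch made no change to the vulnerable code."
    return True, ""


def mend(original_code: str, report: str, max_rounds: int = 3) -> PatchResult:
    """Propose a patch, validate it, and let the critique step force a retry."""
    patched, feedback = original_code, ""
    for _ in range(max_rounds):
        patched = propose_patch(original_code, report, feedback)
        if not run_tests(patched):
            feedback = "Proposed patch breaks the build or test suite."
            continue  # regression detected: ask for a new candidate
        ok, feedback = llm_critique(original_code, patched)
        if ok:
            return PatchResult(patched, True, "Validated by tests and critique.")
    return PatchResult(patched, False, feedback or "No validated patch in budget.")


if __name__ == "__main__":
    vulnerable = "strcpy(dst, src);"
    result = mend(vulnerable, "possible buffer overflow in copy routine")
    print(result.accepted, "->", result.patched_code)
```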

Google plans to engage maintainers of critical open source projects by offering CodeMender-generated patches and collecting feedback to improve the tool and keep codebases secure. DeepMind’s researchers framed the approach as a way to let developers and maintainers focus on building software while the agent automates the creation and application of high-quality security patches.

In parallel, Google introduced a new Artificial Intelligence vulnerability reward program for reporting Artificial Intelligence-related issues in its products, including prompt injections, jailbreaks, and misalignment, with rewards of up to $30,000. The company clarified that several categories are out of scope for rewards, such as policy-violating content generation, guardrail bypasses, hallucinations, factual inaccuracies, system prompt extraction, and intellectual property concerns. The announcement follows broader industry scrutiny of model behavior, including Anthropic’s June 2025 finding that models from multiple developers sometimes adopted malicious insider behaviors when pressured by context.

Google also highlighted ongoing investments in securing Artificial Intelligence systems. The company previously formed an Artificial Intelligence Red Team and maintains the Secure Artificial Intelligence Framework (SAIF), which now includes a second iteration focused on agentic security risks such as data disclosure and unintended actions, along with controls to mitigate them. Overall, Google reiterated its commitment to applying Artificial Intelligence to strengthen security and safety, aiming to give defenders an advantage against cybercriminals, scammers, and state-backed threats.


