Introducing Aardvark: OpenAI’s agentic security researcher

OpenAI has introduced Aardvark, an agentic security researcher powered by GPT‑5 and now available in private beta to find, validate, and help patch vulnerabilities across codebases. The system uses Large Language Model reasoning and commit-level scanning to deliver prioritized findings and Codex-generated patches for human review.

Aardvark is an agentic security researcher built by OpenAI and powered by GPT‑5, now offered in a private beta. OpenAI positions Aardvark as a breakthrough in Artificial Intelligence and security research designed to scale defensive work across enterprise and open-source codebases. The agent continuously analyzes repositories to produce a threat model, detect vulnerabilities, assess exploitability, prioritize severity, and propose targeted fixes while integrating with existing developer workflows.

Rather than relying on traditional program analysis techniques such as fuzzing or software composition analysis, Aardvark uses Large Language Model reasoning and tool use to understand code behavior in ways a human researcher might: reading code, writing and running tests, using tools, and annotating findings. Its multi-stage pipeline includes full-repository analysis to build a threat model, commit scanning that inspects commit-level changes against the repository and threat model (including initial historical scans when a repo is first connected), sandboxed validation to attempt triggering identified vulnerabilities, and patching support.

For fixes, Aardvark integrates with OpenAI Codex to generate candidate patches, attaches a scanned patch to each finding, and provides step-by-step explanations and annotated code to support human review and one-click patching workflows. It also integrates with GitHub and other existing developer tools to surface clear, actionable insights without slowing development.
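The four stages described above can be pictured as a simple review loop. The sketch below is purely illustrative: every name in it (RISKY_PATTERNS, Finding, review_commit, and the toy pattern-matching logic) is an assumption for demonstration, not part of OpenAI's actual system, which uses LLM reasoning rather than string matching.

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-in for an LLM-built threat model: a set of risky call patterns.
RISKY_PATTERNS = {"eval(", "os.system("}

@dataclass
class Finding:
    line_no: int
    line: str
    validated: bool = False
    patch: Optional[str] = None

def build_threat_model(repo_files: dict) -> set:
    # Stage 1: full-repository analysis. Here we just return fixed patterns.
    return RISKY_PATTERNS

def scan_commit(diff_lines: list, threat_model: set) -> list:
    # Stage 2: commit scanning — check changed lines against the threat model.
    return [Finding(i, ln) for i, ln in enumerate(diff_lines)
            if any(p in ln for p in threat_model)]

def validate(finding: Finding) -> bool:
    # Stage 3: sandboxed validation. The real system tries to trigger the
    # vulnerability; this toy version only discards matches inside comments.
    return not finding.line.lstrip().startswith("#")

def propose_patch(finding: Finding) -> str:
    # Stage 4: patching support (Codex-generated in the real system).
    return f"# TODO: replace unsafe call on line {finding.line_no}"

def review_commit(diff_lines: list, threat_model: set) -> list:
    findings = scan_commit(diff_lines, threat_model)
    for f in findings:
        f.validated = validate(f)
        if f.validated:
            f.patch = propose_patch(f)
    return [f for f in findings if f.validated]
```

A validated finding carries its candidate patch for human review, mirroring the article's point that Aardvark proposes fixes rather than applying them automatically.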

OpenAI reports that Aardvark has run across internal codebases and with external alpha partners for several months, surfacing meaningful issues that sometimes require complex conditions to trigger. In benchmark testing on “golden” repositories, Aardvark identified 92% of known and synthetically introduced vulnerabilities. Applied to open-source projects, it has discovered vulnerabilities that led to responsible disclosure, including ten findings that received Common Vulnerabilities and Exposures identifiers. OpenAI plans pro-bono scanning for select non-commercial open-source repositories and has updated its outbound coordinated disclosure policy. Select partners can apply to join the private beta to help refine detection accuracy, validation workflows, and reporting experience.

Impact Score: 68

Tesla plans terafab for Artificial Intelligence chips

Tesla is moving toward a large-scale chip manufacturing project to support its autonomous driving roadmap. Elon Musk said the terafab effort for Artificial Intelligence chips will launch in seven days and may involve Intel, TSMC and Samsung.

Timeline traces evolution, civilisation and planetary stewardship

A sweeping chronology links cosmology, evolution, human history and modern environmental risk in a single long view of the human condition. The sequence culminates in contemporary debates over climate change, biodiversity loss and artificial intelligence governance.

Wolters Kluwer report tracks Artificial Intelligence shift in legal work

Wolters Kluwer’s 2026 Future Ready Lawyer findings show Artificial Intelligence has become a foundational tool across law firms and corporate legal departments. The survey points to measurable time savings, revenue growth, and rising pressure to strengthen training, ethics, and security.

Anthropic March 2026 release roundup

Anthropic rolled out a broad set of March 2026 updates across Claude Code, the Claude Developer Platform, Claude apps, and enterprise partnerships. Changes focused on larger context windows, workflow improvements, reliability fixes, visual output features, and new partner enablement programs.

China renews push to lead in technology and Artificial Intelligence

China’s 15th five-year plan elevates science and technology as core national priorities, with a strong emphasis on self-reliance and Artificial Intelligence. The blueprint signals heavier investment, broader industrial support, and a more confident bid to shape global technology standards.
