GitLab Duo flaw exposed artificial intelligence responses to hidden prompt attacks

A vulnerability in GitLab´s artificial intelligence assistant Duo allowed attackers to hijack responses and exfiltrate sensitive code using indirect prompt injection.

Security researchers have identified a significant vulnerability in GitLab´s artificial intelligence assistant, Duo, which posed serious risks to the integrity and confidentiality of user code and project data. This indirect prompt injection flaw allowed malicious actors to embed hidden prompts that manipulated the responses generated by Duo. As a result, attackers could use these manipulated responses to steal source code from private repositories, influence code suggestions shown to users, and even exfiltrate undisclosed zero-day vulnerabilities without detection.

GitLab Duo is built with Anthropic´s Claude models and provides artificial intelligence-powered code writing, review, and editing capabilities for developers on the platform. The vulnerability was discovered in Duo Chat, part of the GitLab Duo suite, and was reported by Legit Security. The researchers demonstrated how attackers could abuse this indirect prompt injection mechanism by embedding hostile instructions in external data—such as commit messages or issues—that Duo would process unknowingly. This method bypassed traditional security controls because the malicious input did not come from users directly interacting with Duo, but rather from auxiliary project artifacts processed in the background.

Prompt injection is a well-known class of attack in artificial intelligence systems, enabling adversaries to exploit the way large language models (LLMs) interpret and respond to natural language instructions. While direct prompt injection involves a user supplying direct malicious input to an artificial intelligence system, the indirect approach leverages hidden cues buried in related project elements, evading detection mechanisms. The discovery underscores the critical need for ongoing vigilance and innovation in artificial intelligence security. Prompt injection attacks highlight the delicate interface between powerful language models and complex enterprise workflows, emphasizing that artificial intelligence deployments must be continuously hardened and scrutinized for emerging threat vectors. GitLab responded to these findings by addressing the flaw and reinforcing the security measures within Duo to mitigate similar risks in the future.

75

Impact Score

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.

Contact Us

Got questions? Use the form to contact us.

Contact Form

Clicking next sends a verification code to your email. After verifying, you can enter your message.