A recent investigation warns that depending solely on large language models to generate application code can introduce serious vulnerabilities. Published on August 22, 2025, the research argues that these models are trained on vast internet data sets that include insecure sample code, and they often reproduce those unsafe patterns without alerting developers. The author notes that the problem goes beyond poor snippets: artificial intelligence (AI) systems lack business context and do not perform threat modeling, so they fail to anticipate abuse cases or implement secure defaults.
The researcher cites multiple examples to illustrate the risks. In one case, a flaw was found in sample code for a pay-per-view plugin offered by a major cryptocurrency platform. Although the vulnerability existed only in the example implementation and not the core library, it could still be copied into production projects and slip past reviews. The centerpiece of the report is a live proof of concept in which client-side JavaScript generated with an AI assistant exposed an email-sending API endpoint, along with input validation and submission logic, entirely in the browser. Because the endpoint and parameters were publicly visible, an attacker could bypass the intended workflow and validation to send arbitrary requests.
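The report does not reproduce the generated code, but the pattern it describes is a common one. The sketch below is a hypothetical TypeScript reconstruction of that pattern; the endpoint URL, field names, and validation regex are assumptions for illustration, not the actual code from the proof of concept.

```typescript
// Hypothetical sketch of the insecure pattern described in the report:
// the endpoint URL, the request shape, and all validation live in the
// browser, where anyone can read and replay them.
const SEND_EMAIL_ENDPOINT = "https://example.com/api/send-email"; // assumed name

interface ContactForm {
  to: string;
  subject: string;
  body: string;
}

function isValidEmail(address: string): boolean {
  // Client-side-only validation: trivially bypassed by calling the API directly.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(address);
}

async function submitContactForm(form: ContactForm): Promise<void> {
  if (!isValidEmail(form.to)) {
    throw new Error("Invalid recipient address");
  }
  // Nothing here is enforced anywhere the user cannot tamper with it.
  await fetch(SEND_EMAIL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(form),
  });
}
```

Because every line of this file ships to the browser, a visitor can read the endpoint and request shape straight from the page source or the network tab, which is the exposure the proof of concept exploits.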
The proof of concept uses a simple cURL command to trigger the exposed endpoint, demonstrating how an attacker could spam email addresses, phish targets, or impersonate trusted senders. When the issue was reported, the hosting provider responded that remediation was out of scope because the vulnerable code came from a third-party example. This underscores the systemic nature of the problem: insecure examples can be learned by models and then reproduced at scale by developers who trust AI tooling to scaffold applications quickly.
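The article does not reprint the cURL command itself. As a stand-in, the following hypothetical snippet (kept in TypeScript for consistency with the sketch above) issues the same kind of direct request, showing that the client-side workflow and validation impose no real constraint on an attacker.

```typescript
// Hypothetical replay of the exposed endpoint, playing the role of the
// report's cURL command: the attacker skips the page, the form, and the
// client-side validation entirely and posts arbitrary content.
const endpoint = "https://example.com/api/send-email"; // assumed name, as above

void fetch(endpoint, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    to: "victim@example.org",
    subject: "Account notice",        // trusted-looking subject line
    body: "Click here to verify ...", // phishing content of the attacker's choosing
  }),
});
```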
The research concludes that organizations should treat AI coding assistance as a starting point, not a security authority. Combining model-generated output with rigorous human-led code reviews, explicit threat modeling, and automated security testing is essential to keep these vulnerabilities out of production. As AI becomes more embedded in development workflows, security must be integrated early, with clear checks to identify exposed endpoints, enforce server-side validation, and ensure that business logic is enforced on the server and secrets never reach the client.
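As one illustration of what moving validation and business logic to the server might look like in this scenario, here is a minimal hypothetical sketch of an Express-style handler; the route name, allow-list, and limits are assumptions for illustration, not recommendations taken from the report.

```typescript
// Minimal sketch of a server-side counterpart, assuming an Express-style API:
// the recipient allow-list, payload limits, and validation are enforced
// where the attacker cannot edit or bypass them.
import express from "express";

const app = express();
app.use(express.json());

// Assumed allow-list; in the report's terms, business logic stays off the client.
const ALLOWED_RECIPIENTS = new Set(["support@example.com"]);

app.post("/api/send-email", (req, res) => {
  const { to, subject, body } = req.body ?? {};
  if (typeof to !== "string" || !ALLOWED_RECIPIENTS.has(to)) {
    return res.status(403).json({ error: "Recipient not permitted" });
  }
  if (typeof subject !== "string" || typeof body !== "string" || body.length > 10_000) {
    return res.status(400).json({ error: "Invalid payload" });
  }
  // sendEmail(to, subject, body) would go here; it is a placeholder for
  // whatever mail service the application actually uses.
  return res.status(202).json({ status: "queued" });
});

app.listen(3000);
```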