Companies that depend on artificial intelligence (AI) generated code are being warned that new development processes can inject additional risk into already complex software supply chains, in ways that traditional application security programs were never designed to address. Black Duck research found that 95% of organizations rely on AI tools to generate code, yet only 24% apply comprehensive intellectual property (IP), license, security, and quality evaluations to that AI-generated code. The report concludes that this divergence leaves the software supply chain increasingly vulnerable, as security practices fail to keep pace with a new era of rapid software innovation fueled by AI.
The Black Duck findings highlight several practices that correlate with stronger readiness for open source and third-party security. Dependency management drives readiness: 85% of teams with strong dependency tracking report being highly prepared to secure open source, versus 57% of respondents overall. Automation also speeds remediation, since organizations with automated continuous monitoring fix critical vulnerabilities within a day 60% of the time, compared with 45% across all respondents. Among organizations that validate supplier software bills of materials (SBOMs), 63% say they are highly prepared to evaluate third-party software and 59% remediate critical issues within a day. Compliance maturity matters as well: using at least three compliance controls lifts one-day remediation rates to 49%, rising to 54% with four or more, even as 35% cite regulatory complexity as a top challenge.
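To make the SBOM-validation practice concrete, the sketch below shows one way a team might screen a supplier SBOM before accepting a component: flag entries with missing versions or declared licenses, and reject licenses outside an allowlist. This is a minimal illustration that assumes a CycloneDX-style JSON SBOM using license IDs; the allowlist, file name, and policy are hypothetical examples, not details from the Black Duck report.

```python
import json
from pathlib import Path

# Illustrative allowlist; a real policy would come from legal and security review.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def validate_sbom(path: str) -> list[str]:
    """Check a CycloneDX-style JSON SBOM for missing metadata and
    disallowed licenses. Returns a list of human-readable findings."""
    sbom = json.loads(Path(path).read_text())
    findings = []
    for comp in sbom.get("components", []):
        name = comp.get("name", "<unnamed>")
        if not comp.get("version"):
            findings.append(f"{name}: missing version (provenance gap)")
        # Collect declared license IDs; entries using SPDX expressions
        # instead of IDs are out of scope for this sketch.
        licenses = {
            lic.get("license", {}).get("id")
            for lic in comp.get("licenses", [])
        } - {None}
        if not licenses:
            findings.append(f"{name}: no declared license")
        elif not licenses <= ALLOWED_LICENSES:
            findings.append(
                f"{name}: disallowed license(s) {licenses - ALLOWED_LICENSES}"
            )
    return findings

if __name__ == "__main__":
    # "supplier-sbom.json" is a placeholder path for this example.
    for finding in validate_sbom("supplier-sbom.json"):
        print("FAIL:", finding)
```

A check like this is deliberately conservative: a component with no version or no license is treated as a finding rather than silently accepted, which matches the report's emphasis on closing provenance blind spots.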
Experts say security teams should assume that AI-generated code expands software supply chain risk, not just development speed: it creates blind spots around provenance, license obligations, and exploitable flaws, while amplifying dependency sprawl and opaque third-party components. To close the gap, organizations are urged to treat AI output like third-party software and enforce the same controls by default inside developer workflows, starting with rigorous dependency management, automated continuous monitoring, and mandatory SBOM validation for suppliers; one possible shape of such a gate is sketched below. Looking ahead, Saumitra Das of Qualys said analysts expect 95% of code to be AI-generated by 2030, noting that about 30% of code at large enterprises is reportedly AI-generated, compared with close to 90% to 95% at small startups, as of 2025. Because humans cannot reasonably review such volumes for correctness, functionality, readability, and security issues, Das argues that the industry will need diverse review models, more Model Context Protocol (MCP) automation, evolved QA processes, and better guarantees from AI model providers about the training data and licensing behind generated code.
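As a sketch of what "treat AI output like third-party software" could look like inside a developer workflow, the example below is a pre-merge check that rejects any dependency not pinned to a version on a vetted allowlist. The file names, allowlist format, and policy are assumptions for illustration only; they are not prescribed by the report or by Das.

```python
import sys
from pathlib import Path

# Hypothetical allowlist maintained by the security team, in the same
# "name==version" format as requirements.txt.
APPROVED = Path("approved-packages.txt")
REQUIREMENTS = Path("requirements.txt")

def parse_pins(text: str) -> dict[str, str]:
    """Parse 'name==version' pins, skipping comments and blank lines."""
    pins = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        if "==" not in line:
            # Unpinned dependencies defeat reproducible provenance checks,
            # so this sketch treats them as a hard failure.
            raise SystemExit(f"FAIL: unpinned dependency: {line}")
        name, version = line.split("==", 1)
        pins[name.strip().lower()] = version.strip()
    return pins

def main() -> int:
    approved = parse_pins(APPROVED.read_text())
    failures = 0
    for name, version in parse_pins(REQUIREMENTS.read_text()).items():
        if approved.get(name) != version:
            print(f"FAIL: {name}=={version} is not on the vetted list")
            failures += 1
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

Run in CI before merge, a gate like this applies the same scrutiny to dependencies an AI assistant introduces as to those a human adds, which is the default-enforcement posture the experts recommend.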
