Underground artificial intelligence models promise to be hackers’ ‘cyber pentesting waifu’

A Palo Alto Networks Unit 42 report details a growing underground market for custom, jailbroken, and open-source artificial intelligence models that advertise hacking and dual-use penetration testing capabilities.

Palo Alto Networks’ Unit 42 published a report on November 25, 2025 that examines an expanding underground market for custom, jailbroken, and open-source artificial intelligence models sold on dark web forums. The report finds vendors marketing tools as explicit hacking platforms or dual-use penetration testing utilities, with some offered via monthly or yearly subscriptions and others maintained by developer communities. The models claim to assist with tasks such as scanning for vulnerabilities, encrypting data, exfiltrating data, and writing code.

Unit 42 highlights two recent examples. Starting in September, a new version of WormGPT appeared on underground forums; the jailbroken LLM first emerged in 2023 before its developers went underground. The updated iteration, referenced as WormGPT4 in the report, was advertised as offering capabilities “without boundaries.” The original WormGPT claimed to have been trained on malware datasets, exploit writeups, and phishing templates. Unit 42 said WormGPT4 “marks an evolution from simple jailbroken models to commercialized, specialized tools to help facilitate cybercrime,” noting cheap monthly and annual subscriptions, low-cost lifetime access, and an option to purchase the full source code.

Another example is KawaiiGPT, which is available free on GitHub and reportedly took “less than five minutes” to configure on Linux. Branded as “Your Sadistic Cyber Pentesting Waifu,” KawaiiGPT uses a casual tone while delivering malicious outputs and appears to be a copy of an open-source or older commercial model. Unit 42 observed a dedicated community of around 500 developers that update and tweak the tool to maintain effectiveness, and the report characterizes it as an accessible, entry-level yet functionally potent malicious model.

Andy Piazza, senior director of threat intelligence for Unit 42, told CyberScoop that the improvement of these tools underscores artificial intelligence’s dual-use nature in cybersecurity. Unit 42 also noted limitations: internal tests found much of the malware code generated by these models is easily detectable. Still, researchers warned that the real risk lies in lowering the technical barrier to entry, allowing less-technical actors to ask simple questions and obtain scripts that automate parts of an attack.
