Underground artificial intelligence models promise to be hackers' 'cyber pentesting waifu'

A Palo Alto Networks Unit 42 report details a growing underground market for custom, jailbroken, and open-source artificial intelligence models that advertise hacking and dual-use penetration testing capabilities.

Palo Alto Networks’ Unit 42 published a report on November 25, 2025 that examines an expanding underground market for custom, jailbroken, and open-source artificial intelligence models sold on dark web forums. The report finds vendors marketing tools as explicit hacking platforms or dual-use penetration testing utilities, with some offered via monthly or yearly subscriptions and others maintained by developer communities. The models claim to assist with tasks such as scanning for vulnerabilities, encrypting data, exfiltrating data, and writing code.

Unit 42 highlights two recent examples. Starting in September, a new version of WormGPT appeared on underground forums; the jailbroken LLM first emerged in 2023 before its developers went underground. The updated iteration, referenced as WormGPT4 in the report, was advertised as offering capabilities “without boundaries.” The original WormGPT claimed to be trained on malware datasets, exploit writeups, and phishing templates. Unit 42 said WormGPT4 “marks an evolution from simple jailbroken models to commercialized, specialized tools to help facilitate cybercrime,” noting cheap monthly and annual subscriptions, lifetime access priced as low as ?, and an option to purchase the full source code.

Another example is KawaiiGPT, which is available free on GitHub and reportedly took “less than five minutes” to configure on Linux. Branded as “Your Sadistic Cyber Pentesting Waifu,” KawaiiGPT uses a casual tone while delivering malicious outputs and appears to be a copy of an open-source or older commercial model. Unit 42 observed a dedicated community of around 500 developers that update and tweak the tool to maintain effectiveness, and the report characterizes it as an accessible, entry-level yet functionally potent malicious model.

Andy Piazza, senior director of threat intelligence for Unit 42, told CyberScoop that the improvement of these tools underscores artificial intelligence's dual-use nature in cybersecurity. Unit 42 also noted limitations: internal tests found that much of the malware code generated by these models is easily detectable. Still, researchers warned the real risk is a lower technical barrier to entry, allowing less-skilled actors to ask simple questions and obtain scripts that automate parts of an attack.
