Artificial Intelligence tools fuel surge in deepfake and phishing fraud in New Zealand

Cybersecurity firm Norton reports that criminals in New Zealand are rapidly adopting artificial intelligence to upgrade existing scams, using voice cloning, deepfakes, and AI-built phishing sites to drive millions in losses and test traditional verification controls.

Artificial intelligence (AI) tools are reshaping fraud in New Zealand, with criminals using voice cloning, deepfakes, and automated website creation to intensify long-standing scams across voice, video, web, and messaging channels. Cybersecurity firm Norton says AI and deepfake technology are being applied to impersonation, phishing, romance fraud, and investment schemes that now target both individuals and organisations. The company highlights five key AI-enabled fraud types in 2025: voice cloning, AI-built phishing websites, AI-assisted romance scams, business email compromise using synthetic media, and fake celebrity endorsement schemes. Norton notes that hundreds of thousands of AI-generated scam sites have appeared globally this year, while NCSC figures show direct scam and fraud losses of $5.7 million in New Zealand in the most recent quarter, a trend increasingly relevant to cyber, crime, and professional liability insurance portfolios.

Voice cloning has become a prominent threat as widely available tools can recreate a person's voice from only a short audio sample, allowing scammers to pose as relatives, colleagues, or bank staff and apply pressure for urgent financial actions. Citing BNZ, Norton says voice cloning is now regarded as one of the main AI-related scam concerns in New Zealand, as callers may closely reproduce the voices of trusted individuals and undermine informal voice-based checks. At the organisational level, traditional business email compromise is evolving into a multi-channel risk in which spoofed emails are combined with AI-generated audio and sometimes synthetic video, trained on public recordings of senior executives. Norton referenced a reported incident at advertising group WPP in which a cloned CEO voice was allegedly used during a video-style call to seek credentials and authorisation for fund movements. The case illustrates how converged email, voice, and video can make fraudulent instructions harder for staff to challenge, raising underwriting questions about payment verification and executive impersonation controls.

On the consumer and SME front, Norton reports an uptick in phishing sites produced with AI-based website tools that mimic banks, delivery firms, and major technology brands, complete with familiar layouts and customer support features. According to Norton, New Zealand has recorded a 416% increase in web skimming attempts this year, and the firm observes hundreds of new malicious AI-generated sites emerging globally each day, often using small URL and brand variations to fool users. Romance and friendship scams are also being reshaped, with AI chatbots sustaining long-running conversations and deepfake or heavily edited images used as false identity proof. Avast researchers, cited by Norton, found that sextortion scams in New Zealand rose by 137% in early 2025, with attackers using AI-generated deepfake material and breached personal data to threaten exposure unless victims pay. These incidents sit within a broader cyber landscape detailed in the NCSC Cyber Security Insights report for April 1 to June 30, 2025, which recorded 1,315 cyber security incidents, including 514 scams and fraud events and 374 phishing and credential harvesting cases. Direct financial losses were $5.7 million for the quarter, down from $7.8 million, and incidents involving losses of $10,000 or more accounted for $5.3 million, or 94% of total reported loss, across 50 cases.
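The "small URL and brand variations" Norton describes are the kind of signal defenders commonly screen for with string-similarity checks. The sketch below is illustrative only and not from Norton's report; the watch-list domains and the 0.8 threshold are assumptions chosen for the example.

```python
# Illustrative sketch: flag domains that closely imitate known brands,
# one simple heuristic against lookalike phishing URLs.
from difflib import SequenceMatcher

# Hypothetical watch list of legitimate brand domains (assumption).
KNOWN_BRANDS = ["bnz.co.nz", "nzpost.co.nz", "norton.com"]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()

def looks_like_brand(domain: str, threshold: float = 0.8):
    """Return the brand a domain closely imitates, or None.

    An exact match is the real brand, so it is not flagged.
    """
    for brand in KNOWN_BRANDS:
        if domain != brand and similarity(domain, brand) >= threshold:
            return brand
    return None

print(looks_like_brand("bmz.co.nz"))    # one-character variation of bnz.co.nz
print(looks_like_brand("example.org"))  # unrelated domain -> None
```

Real anti-phishing pipelines combine many more signals (homoglyph normalisation, registration age, certificate data); edit-distance screening is just the most direct counter to the single-character swaps mentioned above.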


What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether Artificial Intelligence-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.
