Artificial Intelligence and state-backed espionage set to define 2026 cyber threats

Security researchers expect state-backed cyber espionage and artificial intelligence-driven attacks to dominate the 2026 threat landscape, with drones, European defence industries and small and midsize businesses among the main targets.

Cyber security researchers are warning that state-backed espionage and artificial intelligence-driven attacks are likely to define the global threat landscape in 2026, with particular focus on European defence sectors, unmanned systems and small and midsize businesses. Experts from security firm ESET and Cork Cyber predict that major nation states and organised criminal groups will escalate their activity, using more destructive techniques and seeking higher financial returns from phishing, business email compromise and other social engineering campaigns.

Jean-Ian Boutin, director of threat research at ESET, expects unmanned aerial vehicles and other unmanned platforms to sit at the centre of future intelligence operations by the “Big 4” adversaries of China, Russia, Iran and North Korea. He said: “In 2026, the proliferation of unmanned aerial vehicles (UAVs) in military and commercial spheres will attract the attention of major threat actors of the Big 4 (China, Russia, Iran, North Korea), seeking to steal intellectual property and gather military intelligence. Russia will maintain a relentless focus on Ukraine’s drone capabilities.” He also warned that North Korea and Iran are preparing to ramp up espionage against European and global targets, while China is expected to intensify efforts to monitor Taiwan’s UAV build-up. As unmanned surface vehicles and unmanned ground vehicles mature, ESET expects similar espionage patterns and cyber intrusions to emerge around these systems, increasing the industrial espionage risk for firms that design sensors, navigation tools and communications components.

Boutin also highlighted a shift in Russian state-linked activity, saying that Russia will keep leveraging cyber criminal groups for espionage and that collaboration between state-sponsored actors is likely to become more frequent. He said: “Wiping attacks will persist, targeting energy infrastructure as winter approaches and focusing on the grain sector, which is critical to Ukraine’s economy.” He also forecast more Russian cyber operations against European defence contractors, supply chains and critical infrastructure as countries such as Germany, France and Poland pursue major rearmament programmes. Recent technical reporting indicates that energy grids and agricultural logistics in Ukraine and allied states remain under pressure from destructive malware and wiper attacks aimed at operational technology and data systems.

Dan Candee, CEO of Cork Cyber, said the growing use of artificial intelligence by attackers will significantly alter the everyday threats faced by small and midsize businesses and their IT providers. He stated: “In 2026, cyberattacks are expected to become increasingly driven by artificial intelligence,” with threat actors using generative artificial intelligence to run large-scale, highly targeted phishing campaigns, generate polymorphic malware that evades traditional detection and automate vulnerability exploitation. He added: “In 2025 out of 4.4 million compliance events, 62.5% of Cork Cyber’s payouts to SMBs were from phishing attacks for ACH wire transfer fraud.” Cloned websites, deepfake audio and convincing fake invoices are already common, and generative tools lower the barrier for less skilled attackers while enabling rapid customisation of lures.

Candee warned that the financial impact of a serious breach in 2026 could be severe enough to bankrupt some small and midsize firms, with real costs extending far beyond any ransom to include extended downtime, lost revenue, recovery and remediation spending, regulatory penalties and lasting reputational damage. He argued that small and midsize businesses must treat cybersecurity as a core business risk rather than a simple IT line item, and that effective security programmes should be documented, regularly reviewed and aligned with recognised frameworks. Insurance data and incident reports referenced in the article show steady growth in claims tied to business email compromise and payment fraud, which analysts attribute to both rising attacker sophistication and persistent weaknesses in basic controls. Vendors and consultants expect regulators and customers to demand stronger evidence of documented security processes and periodic reviews across supply chains in 2026.


OpenAI pauses UK Artificial Intelligence investment plans

OpenAI has paused its role in Stargate UK, a major Artificial Intelligence and infrastructure project tied to a wider £31 billion UK-US investment programme. The decision sharpens concerns about energy costs, regulation, and infrastructure readiness for large-scale tech investment in Britain.

Anthropic launches Claude for small business

Anthropic has introduced a version of Claude aimed at small companies, packaging its model inside common business software and backing the launch with training. The move targets a segment that plays a major economic role but has been slower than large enterprises to adopt Artificial Intelligence.

Colorado approves rewrite of state Artificial Intelligence law

Colorado lawmakers passed SB 26‑189, replacing much of the state’s first-in-the-nation cross-sector Artificial Intelligence framework with a narrower regime centered on automated decision-making transparency and consumer rights. The measure reduces compliance burdens from the 2024 law while preserving attorney general enforcement.

Polis signs regulatory review and Artificial Intelligence bills

Gov. Jared Polis opened his post-session bill signings by approving two measures aimed at improving Colorado’s business climate. One mandates regular reviews of state regulations, while the other rewrites the state’s Artificial Intelligence rules around transparency, human review, and enforcement.

Illinois lawmakers weigh Artificial Intelligence rules

Illinois lawmakers are considering a broad set of Artificial Intelligence proposals focused on consumer protection, privacy, minors, and workplace discrimination. Business groups and technology advocates are pushing for a lighter, more uniform approach as questions linger over federal authority and state enforcement.
