EU DisinfoLab’s October roundup lands as the counter-disinformation community heads to Ljubljana for #Disinfo2025 and as platforms beyond the mainstream, including Mastodon and Bluesky, see renewed abuse. The organisation flags mixed signals across Europe, from Spain’s pushback on Meta’s advertising practices to a Dutch court win that cements user access to chronological feeds. EU DisinfoLab has refreshed its Conflict and Crisis Hub with lessons from information warfare in Gaza and Ukraine, and updated its impact methodologies to better quantify disinformation reach and influence.
Moldova emerges as a focal point for foreign interference. A BBC undercover probe exposed a Russian-funded scheme paying Moldovans to seed propaganda, while DFRLab linked a new outlet, REST, to the Kremlin-aligned Rybar network. Open Minds documented an artificial intelligence-powered bot operation on Telegram that generated more than 62,000 posts targeting President Maia Sandu and the pro-EU PAS party, and NewsGuard traced waves of fabricated voter fraud claims after the September vote despite clean OSCE findings. Beyond Moldova, investigations chronicle a far-right content engine operating in UK Facebook groups, an artificial intelligence-enabled campaign stoking unrest in Iran documented by Citizen Lab, and a resurgence of vaccine-autism falsehoods that spread across X and Instagram after being amplified at a White House press conference.
System-level dynamics continue to shape the information environment. Lupa reports that Russia’s Global Fact-Checking Network reframes fact-checking as a matter of sovereignty while flipping narratives against Western institutions. Leaks tied to China-based firms Geedge Networks and GoLaxy show censorship tools and artificial intelligence-driven propaganda sold and deployed at scale. Election transparency took a hit as Google erased its EU political ad archive, while a Dutch court ruled that Meta violated the Digital Services Act by steering users to personalised feeds by default and ordered an easy, durable non-profiled option, backed by daily fines. In Brussels, the European Commission’s planned digital omnibus proposes centralised cookie preference management to cut consent fatigue, but civil liberties concerns persist over potential tracking loopholes and the weakening of other obligations.
Artificial intelligence is increasingly at the centre of legal and market battles. Penske Media sued Google over artificial intelligence-generated search summaries, German and EU media filed a Digital Services Act complaint over AI Overviews, and Encyclopaedia Britannica and Merriam-Webster sued Perplexity over scraping. Studies detail how artificial intelligence-generated mini-clips accelerated propaganda around Taiwan’s recall campaign on Facebook and Instagram, while Euronews describes Russia’s artificial intelligence-powered playbook in Moldova, from spoofed media sites to comment bots. Axios reports that Meta launched a super PAC to oppose state-level artificial intelligence rules and will soon use users’ Meta AI chat interactions for ad targeting, raising fresh privacy questions. A new guide by Indicator maps platform labelling and watermarking, noting that current detection and provenance measures remain fragile and fragmented under the emerging EU AI Act regime.
The Climate Clarity Corner highlights intensified climate disinformation and greenwashing: Petrobras’ influencer-led rebrand ahead of COP30, industry-backed claims in a US climate report cited to weaken regulation, and familiar narratives in Australia that dismiss risk assessments as alarmism. The European Commission’s plan to drop the Green Claims Directive under political pressure alarms transparency advocates, who warn of weakened accountability. EU DisinfoLab also points to new resources and events, including IPIE’s expert survey showing deepening pessimism about the information environment and a training on verifying climate claims.
EU DisinfoLab’s latest releases include an updated Impact-Risk Index that reflects advances in artificial intelligence and coordination techniques, an automated Impact Calculator for standardised assessments, and the launch of a Conflict and Crisis Hub that curates research and tools across war and emergency contexts. Upcoming webinars cover the TRIED benchmark for artificial intelligence detection tools and a deep dive into Russia’s ANO Dialogue as a centralised information control apparatus, with recent sessions unpacking how Operation Overload is evolving with artificial intelligence. Registration remains open, with limited spots, for the annual conference in Ljubljana on 15–16 October.