Antitrust enforcement shifts reshape technology and artificial intelligence dealmaking in 2025

Regulators in the US, UK, and EU recalibrated antitrust tools for digital and artificial intelligence markets in 2025, tightening platform and labour oversight while taking a more pragmatic line on many vertical and innovation-driven deals.

Across 2025, antitrust enforcement in major jurisdictions became more explicitly shaped by political and industrial policy goals, with direct consequences for technology and artificial intelligence companies. In the UK, merger control underwent a pronounced recalibration after a change of leadership at the Competition and Markets Authority and a pro-growth policy steer from government: the CMA blocked no mergers in 2025 and cleared only six with remedies, compared with seven in 2024 and 12 in 2023. Procedural reforms introduced a 40-working-day prenotification KPI and tighter limits on the share-of-supply test, and further changes to Phase 2 decision-making are expected in 2026. In parallel, the European Commission launched a review of its Horizontal and Non-Horizontal Merger Guidelines, last updated in 2004 and 2007. The review signals potential new rebuttable presumptions, a more nuanced treatment of efficiencies linked to sustainability and security of supply, and a sharpened focus on strategic sectors such as artificial intelligence, cloud, and compute-intensive services.

US antitrust agencies signalled a shift toward treating merger enforcement as traditional law enforcement rather than ongoing regulation, coupling continued conduct litigation against large platforms with greater restraint in challenging transactions in artificial intelligence and other innovation-driven markets. The FTC’s January 2025 staff report placed artificial intelligence partnerships firmly on the enforcement agenda, outlining concerns about dependency, lock-in, and access to sensitive information; dissents by Commissioners Ferguson and Holyoak, however, urged caution against “charging headlong to regulate AI” and advocated a “first, do no harm” stance that avoids using antitrust as a proxy for artificial intelligence regulation. That approach was reflected in the handling of major vertical and artificial intelligence-related deals, including Google’s proposed acquisition of Wiz, Salesforce’s acquisition of Informatica, and Meta’s investment in Scale AI: each drew scrutiny but none resulted in a US challenge, reinforcing a pattern in which “scrutiny did not become litigation” absent conventional theories of harm.

Digital markets and gatekeeper regulation moved into a more assertive enforcement phase in Europe and the UK. The EU’s Digital Markets Act generated its first fines in April: €500 million on Apple for steering practices under Article 5(4) DMA and €200 million on Meta for its “consent or pay” model under Article 5(2) DMA. The Commission also launched a tender to study generative artificial intelligence ahead of the DMA’s 2026 review and signalled a possible extension of the regime to additional services. In the UK, the Digital Markets, Competition and Consumers Act took effect on 1 January 2025, enabling tailored conduct rules and pro-competitive interventions for firms designated with Strategic Market Status; Google and Apple have already been designated for key services, and non-compliance carries fines of up to 10% of global turnover. At the same time, courts and enforcers tested new boundaries in platform and labour cases: Judge Mehta’s search remedies order imposed a six-year behavioural and data-access regime on Google rather than structural divestitures; the DOJ and states won a publisher-side ad tech liability ruling against Google; the FTC’s monopolisation case against Meta failed after the court rejected “personal social networking services” as a current market; and European and UK authorities issued their first significant labour market cartel fines, including €329 million on Delivery Hero and Glovo and £4.2 million on media companies for collusion on freelance pay. Collectively, these developments point to 2026 as a year of sustained scrutiny of big technology and artificial intelligence, but with greater emphasis on targeted theories of harm, realistic remedies, and sensitivity to growth and innovation.

Impact Score: 70

Large language models reshape radiology reporting and workflows in Latin America

Large language models are simplifying radiology reports and supporting clinical workflows across Latin America, while providers and regulators work to balance accuracy, trust, and safety. Eden’s deployment at scale illustrates both the operational benefits and the regulatory and cultural hurdles of artificial intelligence integration in healthcare.
