The proliferation of artificial intelligence technologies across sectors prompts renewed scrutiny of how these systems might enable or amplify violations of competition law within the European Union. The EU AI Act underscores two disruptive attributes: artificial intelligence systems operate with varying levels of autonomy and infer, from the input they receive, how to generate outputs that can influence both digital and physical environments. This complexity challenges regulators and legal practitioners who must interpret traditional doctrines under rapidly evolving circumstances.
Central to enforcement under Article 102 of the Treaty on the Functioning of the European Union is the finding of market dominance, which requires rigorous market delineation. In artificial intelligence domains, defining relevant markets is especially fraught: the industry's reliance on a patchwork of technologies, its wide-ranging end uses, and the global race for supremacy among the United States, Europe, and China make dominance assessments fluid. High-profile investment surges by U.S. tech giants have triggered antitrust concerns, yet recent events, such as DeepSeek's swift market entry, demonstrate that artificial intelligence markets remain dynamic and susceptible to disruption, complicating assumptions about entrenched market power.
The article outlines several specific abuse scenarios in which dominant firms may leverage artificial intelligence to reinforce their positions. These include self-preferencing (embedding preferences for their own products or using non-public data to disadvantage rivals), engaging in predatory pricing or real-time price discrimination, and controlling essential inputs such as specialized chips or datasets. Exclusive rebates, discriminatory pricing, refusals to supply critical hardware, and denial of access to valuable data all amplify concerns about foreclosure. Tying and bundling practices, in which artificial intelligence tools are linked to core software suites either contractually or technically, further heighten regulatory concern, as exemplified by ongoing proceedings against Microsoft regarding the integration of Teams with its productivity offerings. The EU's approach also emphasizes the high burden of proof for establishing exploitative abuses, especially around personalized pricing enabled by artificial intelligence. Regulators demand evidence not only of market exclusion but also of systematic harm to competition or consumers without objective justification.
The landscape described is one of heightened vigilance and evolving legal theory, in which policymakers must adapt existing frameworks to address the unique challenges artificial intelligence poses. Ongoing investigations and regulatory responses will likely shape how companies structure their operations, access technology, and govern data in the coming years.