Carolina Are contends that mounting concern over social media harms has pushed governments, particularly in Australia, the UK and Europe, toward highly visible but ineffective measures such as social media bans for under-sixteens and ad hoc content restrictions. She highlights expert criticism of Australia’s ban, which suggests that prohibition risks driving teenagers’ social media use into less visible spaces that are harder to regulate, while allowing platforms to avoid responsibility for creating safe environments. For Are, this kind of reactive regulation does little to address the deeper structures of power underpinning Big Tech, especially when those platforms are tightly bound up with United States economic and political interests and shielded by a narrative that casts them as neutral communications utilities rather than profit-driven media companies.
Using FOSTA/SESTA as a case study, Are shows how targeted laws can backfire and widen censorship. She explains that Section 230 of the United States Communications Decency Act, enacted as part of the 1996 Telecommunications Act, originally protected platforms from civil liability by treating them as intermediaries rather than publishers, but that the FOSTA and SESTA package carved out an exception for content facilitating sex trafficking and sex work. In her research, these changes led to widespread suppression of sex work, sex education, reproductive health, activism and LGBTQIA+ content, with serious financial and psychological harm to creators who depend on online spaces to earn a living. At the same time, she notes that investigations show content posted by individual sex workers is heavily moderated, while Artificial Intelligence-generated pornography tends to be moderated less harshly, creating a sex-negative digital environment where misogynistic material flourishes and educational counter-narratives are sidelined. This pattern, she argues, undercuts the stated rationale of such laws and diverts attention from the structural incentives of Big Tech’s growth-only business model.
Are warns that current regulatory efforts in the UK and the European Union, such as the UK’s Online Safety Act and the EU’s Digital Services Act, focus on penalties and limited transparency measures without confronting how opaque, profit-maximizing design choices determine what content goes viral, what is censored and how harms are handled. She calls for radical transparency into the back-room decisions, enforcement practices, addictive design strategies and biased systems that shape user experience, arguing that without such visibility regulators will continue to chase red herrings. These concerns are sharpened when platform profits intersect with far-right, pro-deregulation politics, exemplified by Elon Musk’s interventions in UK and European debates, the role of Grok on X, and the launch of the Artificial Intelligence-powered Grokipedia. Against a backdrop where laws and technologies “made in the USA” radiate globally and even social media posts can influence border decisions, Are criticizes Europe’s attempt simultaneously to court Big Tech investment and to regulate its harms, including through partnerships with United States Artificial Intelligence firms even as those firms spread misinformation. She frames this as part of a broader technoimperialism that treats regulation as a national security threat, and warns that diluting safeguards in measures like the EU Artificial Intelligence Act risks deepening dependence on United States-made systems. As an alternative, she urges Europe to unite around radical transparency, scrutinize platforms’ business models and reassess the “special relationship”, building evidence-based regulation that protects democracies and communities rather than weakening them.
