Reducing online harms through radical platform transparency

Carolina Are argues that piecemeal laws and youth bans will not fix online harms, and that only radical transparency into social media business models and decision-making can meaningfully challenge Big Tech power. She also warns that Europe’s ambivalent dependence on United States technology and Artificial Intelligence firms risks entrenching a technoimperialist status quo.

Carolina Are contends that mounting concern over social media harms has pushed governments, particularly in Australia, the UK and Europe, toward highly visible but ineffective measures such as social media bans for under-sixteens and ad hoc content restrictions. She highlights expert criticism of Australia’s ban, which suggests that prohibition risks driving teenagers’ social media use into less visible and harder-to-regulate spaces, while allowing platforms to avoid responsibility for creating safe environments. For Are, this kind of reactive regulation does little to address the deeper structures of power underpinning Big Tech, especially when those platforms are tightly bound up with United States economic and political interests and shielded by a narrative that casts them as neutral communications utilities rather than profit-driven media companies.

Using FOSTA/SESTA as a case study, Are shows how targeted laws can backfire and widen censorship. She explains that Section 230 of the United States Communications Decency Act, enacted as part of the 1996 Telecommunications Act, originally protected platforms from civil liability by treating them as intermediaries, but that the FOSTA and SESTA package carved out an exception for content facilitating sex trafficking and sex work. In her research, these changes led to widespread suppression of sex work, sex education, reproductive health, activism and LGBTQIA+ content, with serious financial and psychological harm to creators who depend on online spaces to earn a living. At the same time, she notes that investigations show content by individual sex workers is heavily moderated, while Artificial Intelligence-generated pornography tends to be moderated less harshly, creating a sex-negative digital environment where misogynistic material flourishes and educational counter-narratives are sidelined. This pattern, she argues, undercuts the stated rationale of such laws and diverts attention from the structural incentives of Big Tech’s growth-only business model.

Are warns that current regulatory efforts in the European Union and the UK, such as the EU’s Digital Services Act and the UK’s Online Safety Act, focus on penalties and limited transparency measures without confronting how opaque, profit-maximizing design choices determine what content goes viral, what is censored and how harms are handled. She calls for radical transparency into the back-room decisions, enforcement practices, addictive design strategies and biased systems that shape user experience, arguing that without such visibility regulators will continue to chase red herrings. These concerns are sharpened when platform profits intersect with far-right, pro-deregulation politics, exemplified by Elon Musk’s interventions in UK and European debates, the role of Grok on X, and the launch of the Artificial Intelligence-powered Grokipedia. Against a backdrop where laws and technologies “made in the USA” radiate globally and even social media posts can influence border decisions, Are criticizes Europe’s attempt to simultaneously court Big Tech investment and regulate its harms, including through partnerships with United States Artificial Intelligence firms even as those firms spread misinformation. She frames this as part of a broader technoimperialism that treats regulation as a national security threat, and warns that diluting safeguards in measures like the EU Artificial Intelligence Act risks deeper dependence on United States-made systems. As an alternative, she urges Europe to unite around radical transparency, scrutinize platforms’ business models and reassess the “special relationship” in order to build evidence-based regulation that protects democracies and communities rather than weakening them.

What happens when artificial intelligence agents work together in financial decisions

Researchers at Featurespace’s innovation lab studied how teams of artificial intelligence agents behave when jointly assessing income and credit risk, finding that collaboration can unpredictably amplify or reduce bias. Their work highlights the need to test multi-agent systems as a whole, particularly in high-stakes financial use cases like fraud detection and lending.
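As a rough illustration of why such systems need to be audited end to end, the sketch below compares approval-rate gaps between two demographic groups for individual decision rules and for a majority-vote ensemble. It is a minimal, hypothetical example: the rule-based “agents”, thresholds and synthetic applicants are stand-ins invented here, not Featurespace’s models or data.

```python
import random
from statistics import mean

# Hypothetical rule-based stand-ins for the study's language-model agents.
# Each takes an applicant record and returns True (approve) or False (decline).
def income_agent(applicant):
    return applicant["income"] > 40_000

def credit_agent(applicant):
    return applicant["credit_score"] > 650

def risk_agent(applicant):
    return applicant["debt_ratio"] < 0.4

def ensemble_decision(applicant, agents):
    # Majority vote: a toy model of agents collaborating on one decision.
    votes = [agent(applicant) for agent in agents]
    return sum(votes) > len(votes) / 2

def approval_gap(decide, applicants):
    # Whole-system fairness metric: difference in approval rates between
    # demographic groups A and B under a given decision function.
    rates = {
        group: mean(decide(a) for a in applicants if a["group"] == group)
        for group in ("A", "B")
    }
    return rates["A"] - rates["B"]

if __name__ == "__main__":
    random.seed(0)
    # Synthetic applicants; group B is drawn with slightly lower incomes to
    # mimic an attribute historically correlated with group membership.
    applicants = [
        {
            "group": group,
            "income": random.gauss(50_000 if group == "A" else 45_000, 10_000),
            "credit_score": random.gauss(680, 40),
            "debt_ratio": random.uniform(0.1, 0.6),
        }
        for group in ("A", "B")
        for _ in range(2_000)
    ]

    agents = [income_agent, credit_agent, risk_agent]
    for agent in agents:
        print(f"{agent.__name__:15s} gap = {approval_gap(agent, applicants):+.3f}")
    # The combined system has to be measured in its own right: its gap can end
    # up larger or smaller than any single agent's, so per-agent audits alone
    # cannot predict the bias of the whole pipeline.
    ensemble_gap = approval_gap(lambda a: ensemble_decision(a, agents), applicants)
    print(f"{'ensemble':15s} gap = {ensemble_gap:+.3f}")
```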

LangChain agents: tooling, middleware, and structured output

LangChain’s agent system combines language models, tools, and middleware in an iterative loop, with support for dynamic model, tool, and prompt selection as well as structured output. The docs detail how to configure models, manage state, and extend behavior for production-ready Artificial Intelligence agents.
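A minimal sketch of that loop follows, assuming a recent LangChain 1.x install and an OpenAI API key in the environment. The names shown (create_agent, the @tool decorator, response_format, the structured_response key, the "provider:model" string) follow recent LangChain documentation but may differ between releases, so treat this as an outline rather than the canonical API.

```python
from pydantic import BaseModel
from langchain.agents import create_agent
from langchain_core.tools import tool


@tool
def get_exchange_rate(currency: str) -> float:
    """Return a (stubbed) USD exchange rate for the given currency code."""
    rates = {"EUR": 0.92, "GBP": 0.79}
    return rates.get(currency.upper(), 1.0)


class Conversion(BaseModel):
    """Schema the agent must fill in for its structured output."""
    currency: str
    amount_usd: float
    amount_converted: float


# create_agent wires the model, tools and prompt into an iterative agent loop;
# response_format asks for a validated Conversion object at the end.
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[get_exchange_rate],
    system_prompt="Convert amounts using the exchange rate tool.",
    response_format=Conversion,
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Convert 250 USD to EUR."}]}
)
print(result["structured_response"])
```

Middleware, which the docs describe for extending behavior (for example trimming long histories or guarding tool calls), is attached through the same constructor; the hooks available depend on the release in use.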
