Why artificial intelligence’s role in the circular economy is contested

New research from Cambridge Judge Business School argues that circularity in artificial intelligence ecosystems is shaped by deep disagreements over purpose, strategy and governance rather than purely technical considerations.

Research co-authored by Shahzad (Shaz) Ansari of Cambridge Judge Business School argues that the relationship between artificial intelligence and circular economy goals is far more complex than public support for everyday recycling suggests. Building on Ansari’s prior work on framing, the study contends that circularity in the context of artificial intelligence is not just a technical challenge but an interpretive struggle over what artificial intelligence is, what circularity should mean and who gets to decide. Different actors frame artificial intelligence alternately as a solution to environmental problems or as a contributor to them, and these competing interpretations shape how circular economy objectives are defined, pursued and governed across artificial intelligence ecosystems.

The paper, published in Long Range Planning, identifies three core tensions that structure debates about artificial intelligence and circularity: purpose, strategy and governance. The purpose tension asks whether artificial intelligence is solving climate change or fuelling it, with optimists highlighting climate breakthroughs and critics pointing to the enormous energy consumed by large language models and the greenhouse gas emissions that follow. The strategy tension contrasts incremental efficiency gains in existing business and technology processes with demands for systemic transformation of hardware and business models. The governance tension concerns who controls artificial intelligence sustainability outcomes, pitting internal control by a small number of powerful technology firms against broader sovereignty involving public actors and external oversight. The study positions these three tensions as mechanisms that determine how coordination around circularity occurs in business ecosystems of loosely related yet interdependent actors.

Drawing on policy documents, interviews, industry reports and public statements from circularity and artificial intelligence ecosystem participants between 2023 and 2025, the authors track how techno-solutionist and techno-sceptic framings clash, and how some organisations attempt a reconciliatory “green artificial intelligence” framing. They show how orchestrators such as large artificial intelligence firms, along with other actors, selectively emphasise certain aspects of artificial intelligence circularity while downplaying others, using framing contests to shape shared meaning, mobilise support and determine which definitions of circular artificial intelligence prevail. The research concludes that circularity becomes a site of contestation rather than a straightforward technical goal, and that without substantive changes, reconciliatory framing risks drifting into symbolic action. For managers, the authors recommend dual framing that treats circularity and innovation as interdependent imperatives, supported by granular performance indicators such as energy efficiency per artificial intelligence model; policymakers, in turn, are urged to create standards and forums that foster coordinated yet innovative approaches. Ultimately, the study argues that circularity operates as an internal force whose meaning is continually reshaped through these framing dynamics, influencing new roles, governance structures and value propositions in artificial intelligence ecosystems.
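The granular indicators the authors recommend can be made concrete. Below is a minimal Python sketch of one such metric, energy efficiency per artificial intelligence model, expressed as kWh per 1,000 inferences; the class, field names and figures are hypothetical illustrations, not data from the study.

```python
# Hypothetical sketch of a "granular performance indicator" of the kind
# the authors suggest; all names and numbers here are illustrative.
from dataclasses import dataclass

@dataclass
class ModelEnergyReport:
    name: str
    inferences: int     # inferences served over the reporting period
    energy_kwh: float   # metered energy use over the same period

    def kwh_per_1k_inferences(self) -> float:
        """Energy efficiency: kWh consumed per 1,000 inferences."""
        return self.energy_kwh / (self.inferences / 1_000)

reports = [
    ModelEnergyReport("model-a", inferences=2_500_000, energy_kwh=1_800.0),
    ModelEnergyReport("model-b", inferences=900_000, energy_kwh=1_100.0),
]
for r in reports:
    print(f"{r.name}: {r.kwh_per_1k_inferences():.2f} kWh per 1k inferences")
```

Tracking a per-model figure like this, rather than a single firm-wide total, is what makes the indicator granular enough to compare models and surface trade-offs between circularity and innovation.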

Technology risk trends and priorities for 2026

Grant Thornton outlines how boards and internal audit teams should prepare for escalating technology risks in 2026, from cyber attacks and cloud complexity to artificial intelligence regulation and deepfake-enabled fraud. The analysis stresses that resilience, governance and proactive assurance are now strategic imperatives, not optional enhancements.

What happens when artificial intelligence agents work together in financial decisions

Researchers at Featurespace’s innovation lab studied how teams of artificial intelligence agents behave when jointly assessing income and credit risk, finding that collaboration can unpredictably amplify or reduce bias. Their work highlights the need to test multi-agent systems as a whole, particularly in high-stakes financial use cases like fraud detection and lending.

Reducing online harms through radical platform transparency

Carolina Are argues that piecemeal laws and youth bans will not fix online harms, and that only radical transparency into social media business models and decision-making can meaningfully challenge Big Tech power. She also warns that Europe’s ambiguous dependence on United States technology and artificial intelligence firms risks entrenching a techno-imperialist status quo.

LangChain agents: tooling, middleware, and structured output

LangChain’s agent system combines language models, tools and middleware to iteratively solve tasks, with support for dynamic models, tools, prompts and structured output. The docs detail how to configure models, manage state and extend behaviour for production-ready artificial intelligence agents, as in the sketch below.
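As a rough illustration of the agent loop described above, here is a minimal tool-calling agent using the langgraph prebuilt ReAct agent that ships alongside LangChain; the get_weather tool and the model identifier are assumptions for the sketch, not details from the docs summarised here.

```python
# A minimal sketch of a tool-calling LangChain agent, assuming the
# langgraph prebuilt ReAct agent API; the get_weather tool and the
# model identifier are illustrative assumptions.
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (hypothetical tool)."""
    return f"It is sunny in {city}."

# Recent versions resolve string model IDs via init_chat_model; any
# LangChain chat model object also works in place of the string.
agent = create_react_agent("openai:gpt-4o-mini", tools=[get_weather])

# The agent iterates: the model decides whether to call a tool, observes
# the result, and continues until it produces a final answer.
result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
print(result["messages"][-1].content)
```

The same invoke/messages pattern extends to the dynamic prompts, middleware and structured-output features the docs describe.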
