The age of agentic artificial intelligence: building trust and transparency

Dr Richard Dune analyses the backlash to a Guardian-AWS piece on agentic artificial intelligence and argues that public scepticism shows trust cannot be marketed; it must be earned through independent governance, clear limits, and meaningful human oversight.

The Guardian-AWS article on agentic artificial intelligence presented autonomy as the next frontier, promising automated workflows, reduced human error, and sector efficiencies. It proposed a familiar triad of mitigations: security-first design with zero-trust architectures and real-time monitoring, human oversight through human-in-the-loop and human-on-the-loop frameworks, and transparency-by-design that explains limitations, data pathways, and decision boundaries. The piece framed good governance as a competitive advantage for organisations that balance innovation with responsibility.

What followed was a rapid and hostile public reaction. Readers, encountering branded content paid for by a major cloud provider, labelled the piece advertorial, propaganda, or even “gaslighting.” The response exposed a credibility gap: when the same corporations that develop and profit from artificial intelligence also fund narratives about its safety, assurances read as image management rather than independent accountability. Critics also challenged the label “agentic,” arguing it misleadingly implies intention: today’s large language models (LLMs) and automated systems operate within pre-defined scaffolds, and overstating their autonomy undermines trust.

The controversy surfaces practical lessons for regulators and practitioners, especially in health, social care, and education. First, governance must be independent rather than performative, with external scrutiny, ethical oversight, and regulatory alignment. Second, transparency must include candid limits, failure modes, and who remains accountable when systems err. Third, human-centred design is non-negotiable: meaningful human oversight requires trained people with authority to review and reverse decisions. The blog draws a parallel with CQC-style accountability and highlights ComplyPlus as an example of embedding traceable audit trails and human verification in compliance systems.

The takeaway is clear. Trust in artificial intelligence will not be secured through sponsored narratives. It must be rebuilt through demonstrable, participatory, and accountable governance that aligns technology with organisational values and lived experience. Until organisations confront that contradiction honestly, public scepticism will persist and artificial intelligence will be seen as a system of control rather than a tool for improving services.

Cloud and data center spending accelerates artificial intelligence expansion

Cloud providers, chipmakers, and enterprises are escalating multi-billion dollar investments to build out artificial intelligence and cloud infrastructure across key global markets. Strategic deals and partnerships are reshaping data center footprints, sovereign cloud offerings, and access to high-performance compute.

Global regulatory trends on the use of generative artificial intelligence

Governments in the EU, Japan, the United States, and the United Kingdom are moving quickly to regulate generative artificial intelligence, using a mix of binding laws, guidelines, and standards. Diverging philosophies and timelines are making cross-border compliance planning increasingly complex for companies.

Perplexity launches Computer to orchestrate many artificial intelligence models

Perplexity is rolling out Computer, a cloud-based agent that coordinates 19 artificial intelligence models for complex workflows, as it pivots toward high-value enterprise users and deep research. The launch underscores a broader bet on multi-model orchestration, custom benchmarks, and a boutique business strategy over mass adoption.
