The age of agentic artificial intelligence: building trust and transparency

Dr Richard Dune analyses the backlash to a Guardian-AWS piece on agentic artificial intelligence and argues that public scepticism shows trust cannot be marketed; it must be earned through independent governance, clear limits, and meaningful human oversight.

The Guardian-AWS article on agentic artificial intelligence presented autonomy as the next frontier, promising automated workflows, reduced human error, and sector efficiencies. It proposed a familiar triad of mitigations: security-first design with zero-trust architectures and real-time monitoring, human oversight through human-in-the-loop and human-on-the-loop frameworks, and transparency-by-design that explains limitations, data pathways, and decision boundaries. The piece framed good governance as a competitive advantage for organisations that balance innovation with responsibility.

What followed was a rapid and hostile public reaction. Readers, encountering branded content paid for by a major cloud provider, labelled the piece advertorial, propaganda, or even “gaslighting.” The response exposed a credibility gap: when the same corporations that develop and profit from artificial intelligence also fund narratives about its safety, assurances are read as image management rather than independent accountability. Critics also challenged the language of “agentic” artificial intelligence, arguing it misleadingly implies intention: today’s large language models (LLMs) and automated systems operate within pre-defined scaffolds, and overstating their autonomy only erodes trust further.

The controversy surfaces practical lessons for regulators and practitioners, especially in health, social care, and education. First, governance must be independent rather than performative, with external scrutiny, ethical oversight, and regulatory alignment. Second, transparency must include candid limits, failure modes, and who remains accountable when systems err. Third, human-centred design is non-negotiable: meaningful human oversight requires trained people with authority to review and reverse decisions. The blog draws a parallel with CQC-style accountability and highlights ComplyPlus as an example of embedding traceable audit trails and human verification in compliance systems.

The takeaway is clear. Trust in artificial intelligence will not be secured through sponsored narratives. It must be rebuilt through demonstrable, participatory, and accountable governance that aligns technology with organisational values and lived experience. Until organisations confront that contradiction honestly, public scepticism will persist and artificial intelligence will be seen as a system of control rather than a tool for improving services.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative artificial intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh artificial intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential artificial intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on artificial intelligence oversight.
