The age of agentic artificial intelligence: building trust and transparency

Dr Richard Dune analyses the backlash to a Guardian-AWS piece on agentic artificial intelligence and argues that public scepticism shows trust cannot be marketed; it must be earned through independent governance, clear limits, and meaningful human oversight.

The Guardian-AWS article on agentic artificial intelligence presented autonomy as the next frontier, promising automated workflows, reduced human error, and sector efficiencies. It proposed a familiar triad of mitigations: security-first design with zero-trust architectures and real-time monitoring, human oversight through human-in-the-loop and human-on-the-loop frameworks, and transparency-by-design that explains limitations, data pathways, and decision boundaries. The piece framed good governance as a competitive advantage for organisations that balance innovation with responsibility.
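The distinction between those two oversight frameworks is easy to blur, so here is a minimal, illustrative Python sketch; the Action type and function names are assumptions for this sketch, not drawn from any real framework. Human-in-the-loop means a person approves before anything executes, while human-on-the-loop means the system acts on its own but leaves a trail a person can monitor and reverse.

# Illustrative contrast between human-in-the-loop (approve before acting) and
# human-on-the-loop (act autonomously, but keep a reviewable, reversible record).
# The Action type and function names are assumptions for this sketch.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    execute: Callable[[], None]   # what the automated system wants to do
    undo: Callable[[], None]      # how to reverse it if a reviewer objects

def human_in_the_loop(action: Action) -> None:
    # The system pauses; a named person must approve before anything happens.
    answer = input(f"Approve '{action.description}'? [y/N] ").strip().lower()
    if answer == "y":
        action.execute()
    else:
        print("Rejected; nothing was changed.")

def human_on_the_loop(action: Action, monitor_log: list) -> None:
    # The system acts autonomously but records enough for a person to review
    # the decision afterwards and call action.undo() if needed.
    action.execute()
    monitor_log.append(f"Executed: {action.description}")

if __name__ == "__main__":
    log: list = []
    send = Action(
        description="Send discharge summary to the GP",
        execute=lambda: print("Summary sent."),
        undo=lambda: print("Summary recalled."),
    )
    human_in_the_loop(send)        # waits for a yes/no from a person
    human_on_the_loop(send, log)   # acts immediately, but leaves a reviewable trail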

What followed was a rapid and hostile public reaction. Readers, encountering branded content paid for by a major cloud provider, labelled the piece advertorial, propaganda, or even “gaslighting”. The response exposed a credibility gap: when the same corporations that develop and profit from artificial intelligence also fund narratives about its safety, assurances read as image management rather than independent accountability. Critics also challenged the language of “agentic” artificial intelligence, arguing that it misleadingly implies intention; Dune’s analysis notes that today’s large language models (LLMs) and automated systems operate within pre-defined scaffolds, and that overstating their autonomy undermines trust.

The controversy surfaces practical lessons for regulators and practitioners, especially in health, social care, and education. First, governance must be independent rather than performative, with external scrutiny, ethical oversight, and regulatory alignment. Second, transparency must include candid limits, failure modes, and who remains accountable when systems err. Third, human-centred design is non-negotiable: meaningful human oversight requires trained people with the authority to review and reverse decisions. The blog draws a parallel with Care Quality Commission (CQC)-style accountability and highlights ComplyPlus as an example of embedding traceable audit trails and human verification in compliance systems.
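
To make “traceable audit trails and human verification” concrete, the sketch below shows one minimal way such a trail could work in Python; it is an assumption-laden illustration, not ComplyPlus’s actual implementation. Each entry is hash-chained to the one before it, so silent edits are detectable, and automated decisions remain flagged until a named person signs them off.

# Illustrative audit trail: append-only, hash-chained entries with explicit
# human sign-off. Field names and methods are assumptions for this sketch.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self._entries = []

    def record(self, actor, action, detail):
        # Chain each entry to the previous one so tampering breaks the chain.
        prev_hash = self._entries[-1]["hash"] if self._entries else ""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,        # e.g. "compliance-bot" or a named person
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def human_sign_off(self, entry_hash, reviewer):
        # A named human confirms (or overturns) an automated decision,
        # recorded as its own entry so the verification itself is traceable.
        return self.record(actor=reviewer, action="human_verification", detail=entry_hash)

    def pending_review(self):
        # Automated entries that no human has signed off on yet.
        verified = {e["detail"] for e in self._entries if e["action"] == "human_verification"}
        return [e for e in self._entries
                if e["action"] != "human_verification" and e["hash"] not in verified]

if __name__ == "__main__":
    trail = AuditTrail()
    decision = trail.record("compliance-bot", "flag_training_record", "record overdue")
    print(len(trail.pending_review()))    # 1: awaiting a human
    trail.human_sign_off(decision["hash"], "j.smith (registered manager)")
    print(len(trail.pending_review()))    # 0: verified by a named person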

The takeaway is clear. Trust in artificial intelligence will not be secured through sponsored narratives. It must be rebuilt through demonstrable, participatory, and accountable governance that aligns technology with organisational values and lived experience. Until organisations confront that contradiction honestly, public scepticism will persist and artificial intelligence will be seen as a system of control rather than a tool for improving services.

Impact Score: 52

New prompt injection papers: Agents Rule of Two and The Attacker Moves Second

Two recent papers examine prompt injection risks and defences: Meta AI’s Agents Rule of Two proposes limiting agent capabilities to reduce high-impact attacks, while a large arXiv study, The Attacker Moves Second, shows that adaptive attacks can bypass most published jailbreak and prompt injection defences.
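
For readers unfamiliar with the Rule of Two, the core idea is that within a single session an agent should combine no more than two of three risky capabilities: processing untrustworthy inputs, accessing sensitive data, and changing state or communicating externally. The Python sketch below illustrates that check; the class and field names are assumptions, not Meta’s API.

# Illustrative Rule of Two check: flag any agent configuration that holds all
# three risky capabilities at once. Names are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    processes_untrusted_input: bool      # e.g. reads web pages, emails, uploads
    accesses_sensitive_data: bool        # e.g. private documents, credentials
    changes_state_or_communicates: bool  # e.g. writes files, sends messages

def violates_rule_of_two(caps: AgentCapabilities) -> bool:
    # More than two of the three properties in one session is the
    # high-impact combination the rule is meant to prevent.
    return sum([caps.processes_untrusted_input,
                caps.accesses_sensitive_data,
                caps.changes_state_or_communicates]) > 2

if __name__ == "__main__":
    # A browsing agent that can also read private files and send messages:
    browsing_agent = AgentCapabilities(True, True, True)
    if violates_rule_of_two(browsing_agent):
        print("Rule of Two violated: drop a capability or require human approval.")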

Tesla vows yearly breakthroughs in artificial intelligence chips

Tesla chief Elon Musk said the company will deliver a new artificial intelligence chip design to volume production every 12 months and aims to outproduce rivals in unit volumes. Analysts warn that scaling annual launches and matching established ecosystems will be a substantial operational challenge.
