How journalists can identify and mitigate artificial intelligence bias

Artificial Intelligence systems inherit real-world prejudices, posing risks for journalism. Experts and newsrooms reveal strategies to counter these biases.

Since the debut of ChatGPT in 2022, newsrooms have grappled with integrating artificial intelligence, balancing experimentation with ethical concerns, especially around bias. WAN-IFRA's 2025 survey shows that nearly half of news organizations now use artificial intelligence, though ethical anxieties persist. Drawing on conversations with journalists, technologists, and academics, Ramaa Sharma gathers diverse perspectives on how media institutions are confronting generative artificial intelligence and its inherent risks.

The article clarifies that artificial intelligence bias isn't an accidental flaw but an intrinsic outcome of models trained on existing data, largely sourced from Western, English-speaking, and privileged institutions. These biases, whether statistical, cognitive, or social, become embedded at various stages of artificial intelligence system development (data collection, modeling, or deployment), often amplifying existing inequalities. Notable incidents highlight the risks: wrongful arrests via facial recognition in Detroit, racial bias in organ transplant algorithms, and errors in pension allocation in India. Researcher Peter Slattery and MIT's AI Risk Repository underscore that bias risks are more pronounced in the Global South, where issues are underreported or misunderstood.

Confronting bias is complex. Technical workshops suggest proactive monitoring—adding metadata at every artificial intelligence production stage—but implementing such processes can be challenging for resource-strapped newsrooms. They must contend with both unconscious biases, like confirmation or stereotyping biases, and deliberate manipulations, such as prompt injections or dataset poisoning. The harmful impact of unrepresentative or outdated data is evident in language and gender misrecognition cases, prompting outlets like NPO, SVT, and Bayerischer Rundfunk to develop their own language models or targeted projects to improve inclusivity and accuracy. At Reuters, mitigating gender bias means ongoing tool comparison, custom model building, and post-processing checks.
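The metadata-at-every-stage approach described above can be sketched in a few lines. The stage names, record fields, and tool names below are illustrative assumptions, not any newsroom's documented pipeline; the point is simply that each AI production step appends an auditable provenance record to the content it touches.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One metadata entry per AI production stage (fields are illustrative)."""
    stage: str  # e.g. "data_collection", "modeling", "deployment"
    model: str  # model or tool used at this stage
    note: str   # known limitations or bias checks performed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class Artifact:
    """A piece of AI-assisted content plus its audit trail."""
    content: str
    provenance: list[ProvenanceRecord] = field(default_factory=list)

    def log_stage(self, stage: str, model: str, note: str) -> None:
        """Append a provenance record as the artifact passes through a stage."""
        self.provenance.append(ProvenanceRecord(stage, model, note))

# Hypothetical example: metadata attached at each production stage
draft = Artifact(content="AI-assisted summary of a council meeting")
draft.log_stage("data_collection", "transcript-tool-x",
                "audio from public feed; regional accents flagged for review")
draft.log_stage("modeling", "llm-y",
                "summarized; output screened for gendered language")
draft.log_stage("deployment", "cms",
                "human editor approved before publication")
```

Even this minimal trail lets an editor later ask which model touched a story and what bias checks were (or were not) run, which is the core of the proactive monitoring the workshops recommend.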

Bias doesn't end with data. The ‘bias of the median'—where language models default to mainstream perspectives—marginalizes innovation and minority groups. Tools like Hugging Face's Civics and Shades aim to evaluate and highlight such issues. Editorial content decisions, especially through recommender systems, require constant monitoring; unchecked, these algorithms may increasingly narrow audiences' exposure. News organizations like the Financial Times embed regular fairness, diversity, and misuse checks, while deliberate bias threats force media to appoint dedicated accountability roles and foster deliberative, interdisciplinary oversight panels.
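One minimal way to monitor whether a recommender is narrowing audiences' exposure, as the paragraph above warns, is to track the topic diversity of what it serves over time. The entropy-based check below is a hedged sketch, not any outlet's actual system; the topic labels and batches are hypothetical.

```python
import math
from collections import Counter

def topic_entropy(recommended_topics: list[str]) -> float:
    """Shannon entropy (in bits) of the topic mix in a batch of recommendations.
    A falling entropy across successive batches suggests the algorithm is
    narrowing what an audience sees."""
    counts = Counter(recommended_topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical batches: a broad mix versus a narrowed one
broad = ["politics", "science", "culture", "sports", "economy", "health"]
narrow = ["politics"] * 5 + ["sports"]

assert topic_entropy(broad) > topic_entropy(narrow)  # broad mix scores higher
```

In practice a newsroom would compute this per audience segment and alert editors when diversity trends downward, complementing the manual fairness checks mentioned above.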

Artificial intelligence also holds promise to increase newsroom self-awareness and inclusion. Initiatives at the University of Florida analyze sentiment in news language, revealing subtle framing biases. Dutch broadcasters are designing ‘digital twin’ personas to test content impact on underrepresented audiences, feeding feedback directly into editorial workflows. While eliminating bias is likely unattainable, the collective response from technologists, journalists, and social scientists emphasizes that interdisciplinary collaboration, conscious design, and ongoing scrutiny are vital. As journalism confronts a crisis of public trust, embedding fairness at every level of artificial intelligence deployment is crucial, not optional, to guard against compounding societal harm.


Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
