Growing institutional use of content generated or edited with artificial intelligence (AI) is exposing a deeper truth crisis than anticipated, one in which establishing factual accuracy does not automatically restore public trust. The Department of Homeland Security has been confirmed to be using AI video generators from Google and Adobe to produce content for public communication, including immigration messaging that supports President Trump’s mass deportation agenda. At the same time, a digitally altered White House photo of a woman arrested at an ICE protest, edited to make her appear hysterical and in tears, circulated widely on X, and officials declined to say whether the manipulation was intentional, reflecting a willingness inside government to blur the line between documentation and propaganda.
Public reaction has tended to conflate such government manipulation with separate cases in which news outlets mishandled altered imagery, feeding a narrative that “everyone does it” and that truth no longer matters. In one prominent example, the news network MS Now (formerly MSNBC) aired an AI-edited image of Alex Pretti that made him appear more handsome, a mistake that spawned viral clips, including one from Joe Rogan’s podcast, before the outlet told fact-checkers it had not known the image was altered. Unlike the White House incident, this case involved a failure of verification and after-the-fact partial disclosure rather than deliberate obfuscation, yet the two episodes are perceived as equivalent, which further erodes confidence in traditional gatekeepers.
Efforts to counter the AI truth crisis have centered on tools meant to authenticate content, but these are faltering in practice and in concept. There was plenty of hype in 2024 about the Content Authenticity Initiative, cofounded by Adobe and adopted by major tech companies, which promised to attach labels to content disclosing when it was made, by whom, and whether AI was involved. Yet Adobe applies automatic labels only when content is wholly AI-generated and otherwise makes labeling opt-in. Platforms such as X can strip or hide such labels, and the Pentagon-linked DVIDS site does not visibly display them despite earlier assurances. Meanwhile, a study in Communications Psychology found that participants who watched a deepfake confession to a crime relied on it when judging guilt even after being explicitly told it was fake, indicating that emotional influence persists after exposure. As disinformation experts argue that “transparency helps, but it isn’t enough on its own,” increasingly advanced, cheap, and accessible AI tools are enabling a world where influence survives exposure, doubt is weaponized, and defenders of truth struggle to keep pace.
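To make concrete why stripped or missing labels defeat this approach, the sketch below illustrates, in Python and under simplifying assumptions, how a check for an embedded Content Credentials manifest might work. C2PA, the standard underlying the Content Authenticity Initiative, embeds its manifest in a JPEG’s APP11 metadata segments, so a verifier can only ever report presence or absence; the segment walk here is a simplified illustration rather than the full specification, and it performs no cryptographic validation.

    # Minimal sketch (simplified assumptions): detect whether a JPEG carries
    # an embedded C2PA / Content Credentials manifest by walking its marker
    # segments and looking for the APP11 segments the standard uses.
    # This only detects presence; it does not verify the manifest.
    import struct
    import sys

    def has_content_credentials(path: str) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        if not data.startswith(b"\xff\xd8"):      # not a JPEG at all
            return False
        offset = 2
        while offset + 4 <= len(data):
            if data[offset] != 0xFF:              # lost sync; give up
                break
            marker = data[offset + 1]
            if marker in (0xD9, 0xDA):            # end of image / start of scan
                break
            length = struct.unpack(">H", data[offset + 2:offset + 4])[0]
            segment = data[offset + 4:offset + 2 + length]
            # C2PA manifests ride in APP11 (0xEB) segments as JUMBF boxes
            # labelled "c2pa"; if metadata was stripped, this never matches.
            if marker == 0xEB and b"c2pa" in segment:
                return True
            offset += 2 + length
        return False

    if __name__ == "__main__":
        found = has_content_credentials(sys.argv[1])
        print("Content Credentials manifest present" if found
              else "No manifest found (never added, or stripped in transit)")

Run against a file exported with credentials attached, a check like this would typically find a manifest; run against the same image after a platform has re-encoded or stripped its metadata, there is simply nothing left to inspect, which is the gap critics of the labeling approach point to.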
