President Donald Trump has signed an executive order stating that companies seeking US government contracts must guarantee their artificial intelligence systems are "free from ideological bias." This move, formalized through both executive action and a new administration policy document, sets forth requirements that many experts argue could enable the government to dictate its own worldview to technology firms, while also presenting technical and philosophical conundrums about what constitutes objectivity in artificial intelligence.
According to the administration’s "AI Action Plan," all future federal contracts for artificial intelligence must be awarded only to developers whose systems are "objective" and devoid of "top-down ideological bias." The plan further instructs federal agencies, including the National Institute of Standards and Technology, to remove references to misinformation, diversity, equity, inclusion, and climate change from their guidelines. These measures build on the Trump administration’s ongoing rollback of research into misinformation, diversity initiatives, and climate science within US government institutions.
Prominent tech companies such as Amazon, Google, Microsoft, and Meta, all of which have previously provided artificial intelligence solutions to the federal government, now face the challenge of aligning their models with the government’s new standards. Researchers say the task is all but impossible: large language models inherently absorb and reflect the biases present in their training data, which often lean toward US liberal stances on issues like gender and equality, independent of developers’ explicit intentions. Attempts to address this through prompt engineering or output filtering are piecemeal and cannot fundamentally shift a model’s implicit leanings. Furthermore, any effort to realign commercial artificial intelligence tools with one administration’s preferred worldview risks alienating global users and customers, making industry-wide compliance not only ethically fraught but also commercially risky.
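To make concrete why such mitigations are considered piecemeal, the following is a minimal, hypothetical sketch in Python of the two tactics named above, prompt steering and output filtering. The model call, keyword list, and prompt wording are placeholders invented for illustration, not any vendor's actual API, and nothing in this sketch touches the statistical tendencies baked into a model's weights during training.

```python
# Hypothetical sketch of two surface-level "debiasing" tactics: a steering
# system prompt and a keyword-based output filter. Neither changes the
# model's underlying training-data biases; they only adjust what is asked
# for and what is allowed through.

NEUTRALITY_PROMPT = (
    "Answer factually. Present multiple viewpoints on contested political "
    "topics and avoid endorsing any one of them."
)

# Placeholder standing in for a real model call (e.g., a vendor SDK);
# here it simply returns canned text for demonstration.
def call_model(system_prompt: str, user_prompt: str) -> str:
    return f"[model output conditioned on: {system_prompt!r} | {user_prompt!r}]"

# Crude output filter: withhold responses containing terms an operator
# has decided to treat as "ideological" (placeholder list).
FLAGGED_TERMS = {"example_term_a", "example_term_b"}

def filter_output(text: str) -> str:
    if any(term in text.lower() for term in FLAGGED_TERMS):
        return "[response withheld pending review]"
    return text

def answer(user_prompt: str) -> str:
    raw = call_model(NEUTRALITY_PROMPT, user_prompt)
    return filter_output(raw)

if __name__ == "__main__":
    print(answer("Summarize the debate over climate policy."))
```

Both steps operate only on the prompt going in and the text coming out, which is why researchers describe them as unable to alter the implicit leanings a model has already absorbed from its training data.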
Experts like Becca Branum of the Center for Democracy & Technology and Paul Röttger of Bocconi University argue that Trump’s order paradoxically imposes a government-driven ideology while demanding that systems remain objective. This sets up an environment ripe for political influence and arbitrary enforcement. Other researchers, including Jillian Fisher at the University of Washington, note that true political neutrality in artificial intelligence is functionally unattainable, given the extensive human decision-making involved throughout model development. The upshot: the Trump administration’s strategy to eradicate "woke" artificial intelligence creates a problem with no clear technical, ethical, or operational solution.