Texas voters heading into the 2026 primary are encountering a campaign environment increasingly shaped by artificial intelligence, as candidates deploy synthetic images and videos to mock opponents, dramatize policy attacks and energize their bases. A bill that would have required disclosures in political ads that use artificial intelligence or substantially alter real images passed the Texas House but stalled in the Senate and never became law, leaving the technology's use in political advertising largely unregulated. Experts warn that artificial intelligence is accelerating long-standing trends in political disinformation, adding a faster, more accessible tool to an ecosystem that already includes cheapfakes, out-of-context clips and edited media.
Several high-profile Republicans and Democrats are testing how far they can push artificial intelligence in their messaging. Attorney General Ken Paxton, who is running for U.S. Senate, shared a video made partly with artificial intelligence that shows Sen. John Cornyn dancing with Democratic U.S. Rep. Jasmine Crockett, accompanied by a caption accusing Cornyn of “dancing late into the night with liberal lunatics” and selling out voters. The video is clearly animated, with unnatural movements and blurred, faceless figures, and Paxton included an artificial intelligence disclaimer at the end. Down-ballot, GOP state House candidate Kat Wall released a satirical YouTube ad against Rep. Angelia Orr that uses deepfaked versions of Vladimir Putin and Xi Jinping, synthetic voice clones and manipulated visuals of Orr, ending with the narrator stating: “This ad is a parody using AI video tools.” Wall’s campaign argues the ad includes safeguards such as clear disclosures and documentation to support its claims, but fact-checkers like Angie Holan of the International Fact-Checking Network caution that many viewers still mistake such parodies for reality.
The line between obvious satire and more convincing synthetic media is already blurring. On Dec. 11, 2025, U.S. Rep. Jasmine Crockett posted an artificial intelligence video on Facebook featuring herself as a baby accusing Baby Trump of rigging Texas elections, a cartoonish piece labeled on YouTube as “Altered or synthetic content.” By contrast, Sen. John Cornyn’s Jan. 21 artificial intelligence attack on Republican attorney general candidate Wesley Hunt as a “show dog” carried no disclosure, and detection tools such as Hive Moderation and Google’s artificial intelligence chatbot, Gemini, estimated a 99% probability that the video was generated with artificial intelligence, pointing to tells like blurred signage and unnatural movement. Crockett also faced criticism after an ad titled “Texans don’t back down. We rise.” appeared to surround her with a crowd scene generated by Google artificial intelligence tools, with blurred faces and indistinct body outlines; her campaign praised the spot as the product of “hundreds of hours of real craft” and did not confirm whether artificial intelligence was used.
Beyond video, Cornyn has leaned into artificial intelligence imagery to cast Democratic rivals as horror-movie villains. In a series of artificial intelligence generated images and videos, U.S. Senate candidate James Talarico is transformed into “Taxula,” a fanged caricature clutching a “Tax Bill” with distorted fingers; Beto O’Rourke appears as “Franken-Beto,” a stitched-together monster powered by “California Mandates” and “Chaos”; and former U.S. Rep. Colin Allred is depicted as a green-skinned witch stirring a “Bidenomics” cauldron in front of a stormy Texas Capitol. Fact-checkers like Katie Sanders of PolitiFact and Holan argue that the rapid improvement of artificial intelligence risks making people so skeptical that they no longer trust anything they see. They urge voters to be more intentional about sourcing, asking whether content comes from official campaigns or has been verified by news organizations before sharing items that seem designed to provoke outrage.
