Police Use New AI Tool to Bypass Facial Recognition Bans

Police departments are adopting Artificial Intelligence systems that track people using non-biometric attributes, skirting facial recognition bans and raising new concerns over surveillance.

Police departments and federal agencies in the United States are increasingly leveraging a new Artificial Intelligence surveillance tool that identifies individuals not by their faces, but by physical attributes such as body size, gender, hair color and style, clothing, and accessories. This approach enables law enforcement to sidestep the growing number of legal restrictions on facial recognition technology. According to advocates from the ACLU, this development marks the first widespread use of such tracking systems in the US, with significant potential for abuse, particularly in the context of heightened calls for monitoring protesters, immigrants, and students.

The rapid adoption of Artificial Intelligence within policing is driven by more than 18,000 independent police departments across the country, which enjoy significant autonomy in determining which technologies to acquire. Companies like Flock and Axon provide comprehensive sensor suites—ranging from cameras to gunshot detectors and drones—alongside Artificial Intelligence tools that analyze the large volumes of data these devices collect. Police departments cite improved efficiency, reduced officer workload, and faster response times as primary motivators for investing in these technologies. However, these advancements present complex challenges related to transparency, oversight, and trust between law enforcement and the communities they serve.

Controversies have already emerged over police deployment of Artificial Intelligence-driven devices, such as drones in Chula Vista, California, which, though occasionally successful in emergencies, have sparked lawsuits and privacy concerns, especially among residents in poorer neighborhoods. Critics, including the ACLU's Jay Stanley, highlight the inadequacy of current regulations, noting the absence of federal oversight and a tendency for departments to implement tools first and seek community feedback later, if at all. Crucially, because these new tracking systems avoid biometric data, they often elude existing scrutiny and restrictions. Calls are mounting for public hearings, independent audits, and clear usage guidelines before further adoption of Artificial Intelligence in policing proceeds, as the pace of technological change increasingly outstrips the development of meaningful policy safeguards.

Impact Score: 82

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
