In response to a spike in fatal drug overdoses in 2021, Maryland's Department of Health and the state police turned to federal scientific expertise to better understand shifts in the local drug supply. Partnering with scientists at the National Institute of Standards and Technology (NIST), officials gained access to highly sensitive methods for detecting trace amounts of drugs, explosives, and other hazardous materials. These techniques, developed by Ed Sisco and his team at NIST, gave Maryland authorities crucial insight into emerging synthetic substances, enabling rapid identification and potentially saving lives amid the ongoing opioid crisis.
Meanwhile, the US military is piloting generative artificial intelligence (AI) to streamline intelligence analysis and threat detection. In a novel deployment last year, US Marines training across the Pacific used chatbot-style interfaces to process surveillance data, a milestone in the Pentagon's push to embed generative AI in operational workflows, especially for analyzing complex intelligence in high-stakes scenarios. Despite the promise of faster processing, these efforts have drawn concern from AI safety experts, who worry that large language models may struggle to interpret subtle context and nuance, a crucial capability in sensitive military settings.
Beyond these federal initiatives, the newsletter highlights ongoing debates around international tech alliances, such as US pressure on European partners to choose American satellite services over Chinese ones, as well as broader industry trends. Notably, Nvidia announced plans to manufacture its AI supercomputers in the US, Meta faced a major antitrust trial, and OpenAI released models tailored for advanced coding tasks. These developments underscore the rapid evolution of technology across public health, defense, and industry, along with increasing scrutiny of AI's capabilities and social impact.