Construction remains one of the most hazardous industries in the United States, with more than 1,000 fatalities annually, many from preventable accidents such as falls. Despite public commitments to safety, practical realities on job sites can encourage shortcuts that put workers at risk. Companies like DroneDeploy are applying recent advances in generative artificial intelligence to narrow the gap between proclaimed safety priorities and actual workplace practice. Their tool, Safety AI, introduced in late 2024, analyzes daily reality-capture imagery from active construction sites and flags Occupational Safety and Health Administration (OSHA) violations with a claimed accuracy of 95%. The system is already active across hundreds of U.S. sites and is being rolled out in diverse regulatory environments worldwide, including Canada, the UK, South Korea, and Australia.
Unlike traditional object-recognition systems that simply identify items such as ladders or hard hats, generative AI-powered tools like Safety AI use visual language models (VLMs) to reason about complex scenes and contextual risks. By training on a curated set of real-world violation imagery, these VLMs can assess nuanced safety concerns, such as improper ladder use, a frequent cause of fatal site accidents. Human experts support the system through strategic questioning and iterative prompt engineering, guiding the model to break a scene into sub-questions and reason to a conclusion much as an experienced safety inspector would. The tool is positioned as a supplemental aid rather than a replacement: it depends on oversight from skilled professionals and has yet to overcome persistent challenges, including rare edge cases and weaknesses in spatial reasoning that can limit detection reliability.
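Safety AI's internals are proprietary, but the "strategic questioning" pattern described above can be sketched generically: decompose the scene into the sub-questions a human inspector would ask, then have the VLM answer each before reaching a verdict. The Python sketch below uses the OpenAI SDK as a stand-in VLM backend; the model name, the prompt wording, and the `review_site_photo` helper are illustrative assumptions, not DroneDeploy's implementation.

```python
# Illustrative sketch only: Safety AI's internals are proprietary. This shows
# the general "strategic questioning" pattern, using the OpenAI Python SDK as
# a stand-in VLM backend. The prompt wording, model name, and rule checks are
# assumptions for illustration, not DroneDeploy's actual pipeline.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A decomposed prompt: instead of asking "is this scene safe?", it walks the
# model through the sub-questions an inspector would ask about ladder use.
LADDER_PROMPT = """You are assisting a construction safety inspector.
Answer each question about the image, then give a conclusion:
1. Is a ladder present? If so, what type (extension, step, fixed)?
2. Is anyone on or near the ladder?
3. Is the ladder set at roughly 75 degrees (the 4-to-1 rule)?
4. Does the ladder extend about 3 feet above its landing point?
5. Is the worker maintaining three points of contact?
Conclusion: state PASS or FLAG, citing which check failed."""

def review_site_photo(image_path: str) -> str:
    """Send one reality-capture frame through the questioning prompt."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of VLM
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": LADDER_PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

Decomposing the judgment into checkable sub-questions makes the model's reasoning auditable step by step, which is one reason human experts stay in the loop to refine the question list over time.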
Despite generative AI’s promise, some in the industry remain cautious. Researchers point out that while current visual language models rival human performance at basic object detection, they struggle with 3D scene interpretation and lack inherent common sense, which can lead to dangerous oversights. As a result, some competitors, such as Safeguard AI in Jerusalem and Buildots in Tel Aviv, continue to favor classical machine-learning approaches, prioritizing reliability and avoiding model hallucinations. Critics and legal scholars also warn that AI tools should augment, not replace, human safety managers, so that critical warnings are not missed and technology remains a partner rather than an autocratic overseer. Workers themselves voice concerns about privacy and the potential misuse of surveillance, raising important ethical questions as AI is woven further into construction safety protocols.
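To make the contrast concrete, a classical pipeline of the kind these competitors favor pairs a deterministic object detector with hand-written rules, leaving no generative model in the loop to hallucinate. The sketch below uses the open-source Ultralytics YOLO library as a stand-in; the `hard_hat` class and the person-versus-hat rule are illustrative assumptions, since the stock model only detects generic COCO classes and is not any vendor's actual system.

```python
# Illustrative contrast with the generative approach: a deterministic
# detector plus hand-written rules. Outputs are repeatable for a given
# image, with no free-form generation and hence no hallucination risk.
# The weights, the "hard_hat" class, and the counting rule are assumptions.
from ultralytics import YOLO

# A detector fine-tuned on site imagery would expose classes like
# "hard_hat"; the stock COCO model shown here only knows "person",
# so treat this as a sketch of the pattern, not a working PPE detector.
model = YOLO("yolov8n.pt")

def flag_missing_hard_hats(image_path: str, conf: float = 0.5) -> list[str]:
    """Rule-based flags: detected people outnumbering detected hard hats."""
    results = model(image_path, conf=conf)
    flags = []
    for r in results:
        labels = [model.names[int(c)] for c in r.boxes.cls]
        people = labels.count("person")
        hats = labels.count("hard_hat")  # assumed custom-trained class
        if people > hats:
            flags.append(f"{people - hats} worker(s) possibly without hard hats")
    return flags
```

The trade-off is the one the article describes: a rule like this cannot reason about context it was never trained to detect, but it also cannot invent a violation that is not in the image.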