Humanoid robots require new safety standards as they enter human environments

As humanoid robots step out of factories and into daily life, fresh safety standards are needed to manage both the physical and psychological risks posed by advanced artificial intelligence.

Humanoid robots like Digit are increasingly being deployed in real-world settings, such as warehouses, to assist with physically demanding tasks. Digit, which debuted handling boxes in a Spanx warehouse, operates in spaces cordoned off from humans by barriers and sensors. This separation is necessary due to stability concerns—despite advancements, these robots can topple over or inadvertently strike humans, creating significant safety hazards. The most pressing issue identified by experts is physical stability: unlike traditional industrial robots that simply stop when powered down, humanoid robots can fall, amplifying the danger to anyone nearby.

Industry groups such as the IEEE Humanoid Study Group argue that current safety standards fall short given the unique characteristics of these machines. Humanoids are "dynamically stable," requiring constant power and active control to remain upright. To address this, companies like Agility Robotics and Boston Dynamics are exploring new protocols, such as gentle deceleration instead of abrupt stops and systems that allow robots to safely lower themselves before powering off. Implementing such safety measures without stifling manufacturer innovation remains a challenge, however, especially when it comes to defining what constitutes a humanoid robot by function rather than mere appearance.

Beyond physical safety, there are communication and psychological issues at play. As humanoid robots begin interacting more closely with people, there is a need for standard signaling methods to ensure their intentions are clear, much like cars use lights and indicators. People's tendency to anthropomorphize robots heightens expectations for emotional intelligence, potentially leading to disappointment or unsafe behavior if a robot's abilities are misjudged. Recommendations from the IEEE group include standardized visual and audio cues, human override options, and aligning robots' appearance with their capabilities to prevent confusion. These considerations grow even more critical as robots move beyond industrial settings into homes, hospitals, and public spaces, where they may engage with vulnerable populations or people with special communication needs.

Establishing robust, flexible safety standards for humanoid robots is seen as essential for building public trust and facilitating global commercialization. The IEEE and ISO are working toward baseline requirements that address both physical and psychosocial risks without curbing creativity or technological advancement. While consensus among stakeholders may be elusive, experts argue that a minimal, industry-wide safety standard is vital for the responsible integration of humanoid robots into society. Transparent guidelines will help ensure these advanced robots can coexist safely with people and meet the diverse expectations placed upon them as artificial intelligence technologies continue to evolve.



