Amazon Devices advances zero-touch manufacturing with NVIDIA digital twins

Amazon Devices & Services is using NVIDIA digital twin technologies and Artificial Intelligence to train robotic arms on synthetic data, enabling zero-touch manufacturing and faster product integration.

This month, Amazon Devices & Services deployed a simulation-first manufacturing solution that pairs Amazon-created software with NVIDIA digital twin technologies to move toward zero-touch production. The system trains robotic arms to inspect a range of products for quality auditing and to integrate new goods onto the line without hardware changes. Amazon's approach relies heavily on synthetic data and a modular workflow, aiming to replace time-consuming physical prototyping with virtual, repeatable experiments.

The technical pipeline stitches together several NVIDIA stacks and Amazon cloud services. Amazon imports CAD models into NVIDIA Isaac Sim on the Omniverse platform, then generates more than 50,000 synthetic images per device to train object- and defect-detection models. NVIDIA Isaac ROS produces robotic trajectories, while cuMotion and the nvblox library support fast collision-free planning on NVIDIA Jetson AGX Orin modules. FoundationPose, a foundation model trained on 5 million synthetic images, provides pose estimation and object tracking that can generalize to unseen objects. On the cloud side, model development was accelerated with Amazon EC2 G6 instances and AWS Batch, while Amazon Bedrock and Bedrock AgentCore handle higher-level task planning and ingest multimodal product specifications.
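The synthetic-data step can be pictured with the Replicator API that ships with NVIDIA Isaac Sim. The snippet below is a minimal sketch, not Amazon's pipeline: the USD asset path, camera placement, and randomization ranges are assumptions for illustration, but the pattern of rendering tens of thousands of labeled frames per device for detector training follows standard Replicator usage.

```python
# Minimal, hypothetical sketch of synthetic-data generation for one device using
# NVIDIA Isaac Sim's Replicator API (omni.replicator.core). Asset path, camera
# pose, and randomization ranges are illustrative placeholders.
import omni.replicator.core as rep

DEVICE_USD = "/assets/devices/example_device.usd"  # hypothetical CAD-derived USD asset

with rep.new_layer():
    # Load the device geometry exported from CAD and set up a camera + render product.
    device = rep.create.from_usd(DEVICE_USD)
    camera = rep.create.camera(position=(0.0, 0.0, 0.6), look_at=(0.0, 0.0, 0.0))
    render_product = rep.create.render_product(camera, resolution=(1280, 720))

    # Randomize the device pose on every frame so the detector sees varied viewpoints.
    with rep.trigger.on_frame(num_frames=50_000):
        with device:
            rep.modify.pose(
                position=rep.distribution.uniform((-0.1, -0.1, 0.0), (0.1, 0.1, 0.0)),
                rotation=rep.distribution.uniform((0, 0, -180), (0, 0, 180)),
            )

    # Write RGB frames plus tight 2D bounding-box labels for detector training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_device_dataset",
                      rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])

# Run data generation (executed inside Isaac Sim / Omniverse, not a plain Python environment).
rep.orchestrator.run()
```

In a real deployment the same scene description would also randomize lighting, materials, and clutter, and additional writers would emit the segmentation and defect labels the article's detection models require.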

That collection of tools is designed to enable what Amazon describes as 'zero-shot manufacturing' and a broader move to generalized manufacturing. Lines can switch from auditing one product to another with software updates alone. The modular design already supports defect detection during production and is built to integrate more advanced reasoning components in the future, including NVIDIA Cosmos Reason for deeper analysis and decision-making. Eliminating the need for physical prototypes reduces cost and shortens the time it takes to bring new devices to consumers.
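To make the software-only retooling concrete, the sketch below shows how a multimodal product specification might be submitted to Amazon Bedrock for higher-level task planning, as described above. The model ID, file name, prompt, and output handling are assumptions for illustration; the article does not document Amazon's actual prompts, schemas, or agent configuration.

```python
# Hypothetical sketch: turning a multimodal product specification into an
# inspection task plan via the Amazon Bedrock Converse API (boto3).
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder product-specification drawing; a real pipeline would pull specs
# from an internal source rather than a local file.
with open("example_device_spec.png", "rb") as f:
    spec_image = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model choice
    system=[{
        "text": "You plan quality-audit steps for a robotic inspection cell. "
                "Return a numbered list of inspection steps."
    }],
    messages=[{
        "role": "user",
        "content": [
            {"text": "Generate an inspection plan for the attached product specification."},
            {"image": {"format": "png", "source": {"bytes": spec_image}}},
        ],
    }],
)

# In the scenario the article describes, a plan like this would be handed to the
# station's perception and motion stack rather than printed.
plan_text = response["output"]["message"]["content"][0]["text"]
print(plan_text)
```

The point of the sketch is the workflow shape: a new product arrives as a specification, a planning model turns it into station-level tasks, and the line is re-tasked without touching hardware.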

The deployment at an Amazon Devices facility demonstrates a path from simulation to the real world, with simulated stations matching real ones for training and validation. The result is faster on-boarding of new products, more flexible robotic operations, and a clearer route to autonomous, software-driven manufacturing pipelines that scale across products and stations.

Impact Score: 76

Red Hat Artificial Intelligence 3 tackles inference complexity

Red Hat introduced Red Hat Artificial Intelligence 3 to move enterprise models from pilots to production, with a strong focus on scalable inference on Kubernetes. The release adds llm-d, a unified API on Llama Stack, and tools for Model-as-a-Service delivery.

NVIDIA DGX Spark arrives for world's Artificial Intelligence developers

NVIDIA is shipping DGX Spark, a compact desktop system that delivers a petaflop of Artificial Intelligence performance and unified memory to bring large model development and agent workflows on premises. Partner systems from major PC makers and channel partners broaden availability starting Oct. 15.

EU regulatory developments on the Artificial Intelligence Act

The European Commission finalized a General Purpose Artificial Intelligence Code of Practice and signaled phased enforcement of the Artificial Intelligence Act. Companies gain transitional breathing room but should use it to align with new transparency, copyright, and safety expectations.
