Artificial Intelligence improves diagnosis of invisible airway blockages

Researchers at the University of Southampton have developed an Artificial Intelligence tool that detects radiolucent foreign bodies on chest CT scans with higher sensitivity than experienced radiologists. The model combines an airway mapping technique with a neural network and was validated against bronchoscopy-confirmed cases.

Researchers at the University of Southampton have developed an Artificial Intelligence tool that identifies hard-to-see objects lodged in patients’ airways and published the results in npj Digital Medicine. The work was led by Dr Yihua Wang, Dr Zehor Belkhatir and Prof Rob Ewing in collaboration with teams in Wuhan, China. The team says the model is intended to support radiologists by flagging subtle cases that can be missed on routine imaging.

Foreign body aspiration occurs when material such as plant matter or shell fragments becomes trapped in the airway. When those objects are radiolucent they are faint or invisible on X-rays and even on CT scans, which contributes to missed or delayed diagnoses and can lead to serious complications. The researchers note that up to 75 per cent of foreign body aspiration cases in adults involve radiolucent items, creating a clear diagnostic challenge.

To address this, the group built a deep learning model that combines a high-precision airway mapping technique (MedpSeg) with a neural network that analyses chest CT images for hidden signs of foreign bodies. The model was trained and tested on three independent patient groups totalling more than 400 patients in collaboration with hospitals in China. For a direct comparison, three radiologists with more than ten years’ experience reviewed 70 CT scans, 14 of which were radiolucent foreign body aspiration cases confirmed by bronchoscopy.
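The article does not detail the model’s architecture, so the following is only a minimal illustrative sketch of a two-stage design of this kind: a precomputed airway mask (standing in for MedpSeg’s output) restricts a small 3D CNN classifier to the mapped airway. All class names, shapes and layers here are hypothetical, not the authors’ actual model.

```python
import torch
import torch.nn as nn

class AirwayFBAClassifier(nn.Module):
    """Toy 3D CNN that classifies an airway-masked chest CT volume as
    containing a radiolucent foreign body or not (hypothetical sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),      # global pooling -> (B, 32, 1, 1, 1)
        )
        self.head = nn.Linear(32, 1)      # single logit: foreign body vs. none

    def forward(self, ct, airway_mask):
        x = ct * airway_mask              # stage 2 sees only the mapped airway
        x = self.features(x).flatten(1)
        return self.head(x)

# Toy usage with a synthetic 64^3 CT volume; the random binary mask stands in
# for a real airway segmentation such as MedpSeg would produce.
ct = torch.randn(1, 1, 64, 64, 64)
mask = (torch.rand(1, 1, 64, 64, 64) > 0.9).float()
prob = torch.sigmoid(AirwayFBAClassifier()(ct, mask))
print(prob.item())                        # probability a foreign body is present
```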

The model achieved higher sensitivity than the radiologists, detecting 71 per cent of confirmed cases compared with 36 per cent for the human readers. The radiologists recorded 100 per cent precision with no false positives, while the model had 77 per cent precision and produced some false positives. On the combined F1 metric, which balances precision and recall, the model scored 74 per cent versus 53 per cent for the radiologists.

The authors emphasise that the system is designed to assist, not replace, clinicians, and they plan multi-centre studies with larger, more diverse populations to refine the model and reduce bias. “These objects can be extremely subtle and easy to miss, even for experienced clinicians,” said PhD researcher Zhe Chen. “The results demonstrate the real-world potential of Artificial Intelligence in medicine, particularly for conditions that are difficult to diagnose through standard imaging,” said Dr Yihua Wang.

The study was supported by the UK Medical Research Council and the China Scholarship Council and is available online via npj Digital Medicine (doi: 10.1038/s41746-025-02097-w).
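The reported F1 scores follow directly from the precision and sensitivity figures above; here is a quick check in Python (illustrative only, not from the paper):

```python
def f1(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall (sensitivity)."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.77, 0.71), 2))  # model: 0.74, the reported 74 per cent
print(round(f1(1.00, 0.36), 2))  # radiologists: 0.53, the reported 53 per cent
```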

Impact Score: 52

RenderFormer: how neural networks are reshaping 3D rendering

RenderFormer, from Microsoft Research, is the first model to show that a neural network can learn a complete graphics rendering pipeline. It is designed to support full-featured 3D rendering using only machine learning and no traditional graphics computation.

Training without consent is risky business: what business owners need to know about the proposed Artificial Intelligence Accountability and Data Protection Act

The proposed Artificial Intelligence Accountability and Data Protection Act would create a federal private right of action for use of individuals’ personal or copyrighted data without express consent, exposing companies that train models without permission to new liability. The bill would broaden covered works beyond registered copyrights and allow substantial remedies including compensatory, punitive and injunctive relief.
