Artificial Intelligence Improves Identification of Child Abuse in Emergency Departments

Artificial Intelligence offers greater accuracy in detecting child abuse in emergency rooms, outperforming traditional diagnostic coding methods.

A new study presented at the Pediatric Academic Societies 2025 Meeting demonstrates that artificial intelligence substantially enhances the detection of physical child abuse in emergency departments. Researchers designed a machine learning model that analyzes diagnostic codes related to high-risk injuries and physical abuse indicators, and compared its accuracy to conventional methods relying strictly on diagnostic codes assigned by healthcare providers or administrative staff.

The traditional diagnostic coding system misses approximately 8.5% of child abuse cases, often because of limitations in how healthcare providers assign specific codes. The machine learning model, by contrast, produced more reliable prevalence estimates and identified trends more effectively, reducing estimation errors across multiple hospital sites. The study involved 3,317 emergency department cases concerning children under age 10, with nearly three-quarters involving children under 2 years old. The analysis showed that relying on abuse-specific codes alone overestimated prevalence, while the machine learning approach kept estimation errors low, ranging from -3.0% to 2.6% (average absolute error 1.8%).

Researchers conducted a secondary analysis of cases evaluated between February 2021 and December 2022 at seven children’s hospitals participating in large-scale child abuse research networks. Their methodology utilized LASSO logistic regression models to predict likelihood of physical abuse, integrating both injury and abuse-specific codes. The improved accuracy could enable earlier intervention, better treatment, and improved outcomes for vulnerable children. Study authors emphasize that artificial intelligence-powered tools offer significant potential for advancing both clinical care and research in sensitive and complex domains like child abuse detection.
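The study's LASSO approach can be illustrated with a minimal sketch. The code below fits an L1-penalized logistic regression to predict a binary abuse label from binary diagnostic-code indicators; the feature layout, synthetic data, and parameter choices are hypothetical illustrations, not the study's actual model or dataset.

```python
# Sketch of a LASSO (L1-penalized) logistic regression, in the spirit of the
# study's method of predicting physical-abuse likelihood from injury and
# abuse-specific diagnostic codes. All data and features here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical binary indicators: 1 if a given diagnostic-code group is present.
# Columns: [abuse-specific code, high-risk injury code, other injury code]
X = rng.integers(0, 2, size=(200, 3)).astype(float)
# Synthetic labels loosely correlated with the first two indicators.
y = ((X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 200)) > 1).astype(int)

# The L1 penalty shrinks uninformative coefficients toward zero (the LASSO
# effect); the 'liblinear' solver supports L1-penalized logistic regression.
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(X, y)

# Predicted probability of abuse for a case with both risk codes present.
prob = model.predict_proba([[1.0, 1.0, 0.0]])[0, 1]
print(prob)
```

In practice, the coefficients that survive the L1 penalty indicate which code groups carry predictive signal, which is what makes this family of models interpretable enough for clinical research settings.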


OpenAI launches GPT-5.4-Cyber for cyber defense

OpenAI has introduced GPT-5.4-Cyber and expanded its Trusted Access for Cyber program to support cybersecurity defenders. The company is pairing broader defensive capabilities with tighter identity verification to limit misuse.

UK regulators and banks assess cybersecurity risks from Anthropic model

UK financial regulators and major banks are assessing cybersecurity risks linked to Claude Mythos Preview, Anthropic’s new Artificial Intelligence model. Officials are coordinating with industry and national security bodies as concerns grow over the model’s ability to uncover critical system vulnerabilities.

Debate over Europe’s Artificial Intelligence ambitions intensifies

Discussion around Europe’s Artificial Intelligence strategy centered on whether the region is being held back by capital, culture, regulation, or fragmentation. Mistral’s push for a European playbook drew both support for digital sovereignty and criticism that it reads like a bid for political backing.

Anthropic restricts Claude Mythos over cybersecurity risks

Anthropic is limiting access to Claude Mythos Preview after warning that the model can identify and exploit severe software vulnerabilities. Banks, cybersecurity firms, and government officials are now evaluating how defensive use of the system can be balanced against the risks of misuse.

ASML raises EUV shipment target as memory demand grows

ASML plans to ship over 60 EUV lithography systems in 2026, up from 48 in 2025, as memory makers expand capacity for Artificial Intelligence data center demand. South Korea accounted for 45% of Q1 2026 revenue, reflecting strong purchases from major memory producers.
