Artificial Intelligence Improves Identification of Child Abuse in Emergency Departments

Artificial Intelligence offers greater accuracy in detecting child abuse in emergency rooms, outperforming traditional diagnostic coding methods.

A new study presented at the Pediatric Academic Societies 2025 Meeting demonstrates that artificial intelligence substantially enhances the detection of physical child abuse in emergency departments. Researchers designed a machine learning model that analyzes diagnostic codes related to high-risk injuries and physical abuse indicators, and compared its accuracy to conventional methods relying strictly on diagnostic codes assigned by healthcare providers or administrative staff.

The traditional diagnostic coding system routinely misses approximately 8.5% of child abuse cases, often because of limitations in how healthcare providers assign specific codes. The machine learning model, by contrast, produced more reliable prevalence estimates and identified trends more effectively, with lower estimation errors across multiple hospital sites. The study examined 3,317 emergency department cases involving children under age 10; nearly three-quarters involved children under 2 years old. The analysis showed that relying only on abuse-specific codes overestimated prevalence, while the machine learning approach kept error rates low, ranging from -3.0% to 2.6% (mean absolute error 1.8%).
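To make the reported metric concrete, the sketch below shows how a mean absolute error across sites is computed from per-site prevalence estimation errors. The per-site values here are hypothetical placeholders, not the study's data; only the calculation itself is illustrated.

```python
# Illustration only: computing mean absolute error (MAE) across hospital sites.
# The per-site error values below are hypothetical, NOT the study's actual data.
site_errors = [-3.0, 2.6, -1.2, 0.8, -2.1, 1.5, 0.4]  # % prevalence error per site (made up)

# MAE averages the magnitude of each site's error, ignoring sign,
# so over- and under-estimates do not cancel each other out.
mae = sum(abs(e) for e in site_errors) / len(site_errors)
print(round(mae, 2))
```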

Researchers conducted a secondary analysis of cases evaluated between February 2021 and December 2022 at seven children’s hospitals participating in large-scale child abuse research networks. Their methodology utilized LASSO logistic regression models to predict likelihood of physical abuse, integrating both injury and abuse-specific codes. The improved accuracy could enable earlier intervention, better treatment, and improved outcomes for vulnerable children. Study authors emphasize that artificial intelligence-powered tools offer significant potential for advancing both clinical care and research in sensitive and complex domains like child abuse detection.
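The study's exact model and feature set are not published here, but the general technique it names, LASSO (L1-penalized) logistic regression over diagnostic-code indicators, can be sketched as follows. The data, feature columns, and parameter values below are synthetic assumptions for illustration only.

```python
# Hedged sketch of LASSO logistic regression over diagnostic-code indicators.
# All data here is synthetic; feature columns stand in for hypothetical
# injury and abuse-specific code groups (e.g. rib fracture, head trauma codes).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Binary feature matrix: one column per diagnostic-code group, one row per case.
n_cases = 500
X = rng.integers(0, 2, size=(n_cases, 6))

# Synthetic labels: abuse more likely when certain codes are present.
logits = 1.5 * X[:, 0] + 2.0 * X[:, 1] - 2.0
y = (rng.random(n_cases) < 1 / (1 + np.exp(-logits))).astype(int)

# The L1 penalty ("LASSO") shrinks weak predictors' coefficients to zero,
# performing variable selection alongside the fit.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)

# Predicted probability of physical abuse for each case.
probs = model.predict_proba(X)[:, 1]
```

In practice the model's predicted probabilities, rather than the raw presence of an abuse-specific code, would drive the prevalence estimate at each site.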


Finance officials raise banking security concerns over Anthropic’s Claude Mythos model

Anthropic’s Claude Mythos has prompted urgent discussions among finance ministers, central bankers and banks over the risk that advanced cyber capabilities could expose weaknesses in critical financial systems. Governments and financial institutions are being given early access to test and strengthen defences before any broader release.

UK delays artificial intelligence copyright reform

The UK government has postponed immediate copyright reform for Artificial Intelligence, leaving developers, creatives, and rightsholders to operate under existing law. Licensing, transparency, digital replicas, and future litigation are now set to shape the next phase of policy.

Memory architecture is central to autonomous LLM agents

Memory design, not just model choice, determines whether autonomous agents can sustain context, learn from experience, and stay reliable over time. A practical framework centers on how information is written, managed, and read across multiple memory types.
