Artificial intelligence detects suicide risk missed by standard assessments

Researchers at Touro University report that an artificial intelligence tool using large language models detected signals of perceived suicide risk that standard multiple-choice assessments missed. The study applied Claude 3.5 Sonnet to audio interview responses and compared the model's outputs with participants' self-rated likelihood of attempting suicide.

Researchers at Touro University published a study in the Journal of Personality Assessment showing that an artificial intelligence tool can identify nuances in speech linked to perceived suicide risk that conventional assessments often miss. Lead author Yosef Sokol, PhD, and colleagues argue that typical multiple-choice measures lack the nuance to capture how people think and feel about their future, and that asking directly about suicide can suppress honest responses. The team focused on future self-continuity, the sense of connection between a person's present and future self, which the authors say is closely tied to suicidal thinking.

The research used a large language model (LLM) to analyze audio responses to 15 interview prompts about participants' lives and futures; the model tested was Claude 3.5 Sonnet. The study sample included 164 participants, 93 of whom reported past-year suicidal ideation. Participants also rated their own perceived risk on a 1-to-7 scale for how likely they thought they were to attempt suicide in the future. The researchers compared LLM-derived signals and standard assessment tools against that self-reported perceived risk to evaluate predictive alignment.
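The paper's code is not included in the article, but the general workflow can be illustrated with a short sketch. The snippet below is a hypothetical outline, not the authors' pipeline: it assumes interview audio has already been transcribed to text, uses the `anthropic` Python SDK, and invents the prompt wording and a simple three-feature composite purely for illustration.

```python
# Hypothetical sketch, not the study's actual code: rate each transcribed
# interview response with Claude 3.5 Sonnet, then correlate a simple
# composite with participants' self-rated 1-to-7 perceived risk.
import json

import anthropic
from scipy.stats import pearsonr

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "Rate this interview response about the speaker's future on three "
    "1-to-7 scales: coherence of the future description, emotional tone, "
    "and level of specific detail. Reply with JSON only, like "
    '{"coherence": 4, "tone": 3, "detail": 5}.\n\nResponse:\n'
)

def score_transcript(text: str) -> float:
    """Average the three model-rated speech features for one response."""
    msg = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # the model named in the article
        max_tokens=100,
        messages=[{"role": "user", "content": PROMPT + text}],
    )
    ratings = json.loads(msg.content[0].text)  # assumes the model returns bare JSON
    return (ratings["coherence"] + ratings["tone"] + ratings["detail"]) / 3

def alignment(transcripts: list[str], self_rated: list[int]) -> tuple[float, float]:
    """Pearson correlation between LLM composite scores and self-rated risk."""
    llm_scores = [score_transcript(t) for t in transcripts]
    r, p = pearsonr(llm_scores, self_rated)
    return float(r), float(p)
```

In the study itself, the model's output fed a measure of future self-continuity rather than a raw average of three ratings; the composite here is only a stand-in for that construct.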

The LLM identified speech features that standard tools overlooked, including coherence when describing the future, emotional tone, and level of specific detail. According to the researchers, those signals produced a stronger measure of future self-continuity and better predicted participants' own ratings of their risk. The authors note that the study compared model outputs to perceived risk rather than to verified attempts, but they emphasize that perceived risk is clinically valuable because it predicts later suicidal behavior. They suggest LLM-driven scoring could be deployed in hospitals, crisis hotlines, or therapy sessions, potentially using a brief set of recorded questions to generate a risk score, and that it may also aid detection of depression and anxiety.
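The article does not specify how such scoring would be packaged for a hotline or clinic. As a loose illustration only, the hypothetical sketch below fits a linear map from the composite above onto the study's 1-to-7 perceived-risk scale, so that a brief set of recorded questions could yield a single clamped score.

```python
# Hypothetical follow-on sketch: map the LLM composite onto the 1-to-7
# perceived-risk scale used in the study. The linear fit is an assumption
# for illustration, not a method described in the article.
import numpy as np

def fit_risk_scorer(llm_scores: list[float], self_rated: list[int]):
    """Least-squares fit of self-rated risk on the LLM composite."""
    slope, intercept = np.polyfit(llm_scores, self_rated, deg=1)

    def score(composite: float) -> float:
        # Clamp the mapped value to the instrument's 1-to-7 range.
        return float(np.clip(slope * composite + intercept, 1.0, 7.0))

    return score
```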


Nvidia DGX SuperPOD sets stage for Rubin artificial intelligence systems

Nvidia is positioning its DGX SuperPOD as the reference architecture for large-scale systems built on the new Rubin platform, which unifies six chips into a single artificial intelligence supercomputing stack. The company is targeting demanding agentic artificial intelligence workloads, mixture-of-experts models and long-context reasoning across enterprise and research deployments.

Intel launches Core Ultra Series 3 Panther Lake processors on Intel 18A node

Intel has introduced its Core Ultra Series 3 Panther Lake mobile processors at CES, positioning them as the first artificial intelligence PC platform built on the Intel 18A process and produced in the United States. The lineup targets thin-and-light laptops with integrated Arc graphics and dedicated neural processing for artificial intelligence workloads.
