Artificial intelligence detects suicide risk missed by standard assessments

Researchers at Touro University report that an artificial intelligence tool using large language models detected signals of perceived suicide risk that standard multiple-choice assessments missed. The study applied Claude 3.5 Sonnet to audio interview responses and compared the model's outputs with participants' self-rated likelihood of attempting suicide.

The study, published in the Journal of Personality Assessment, found that the tool can identify nuances in speech linked to perceived suicide risk that conventional assessments often miss. Lead author Yosef Sokol, PhD, and colleagues argue that typical multiple-choice measures lack the nuance to capture how people think and feel about their future, and that asking directly about suicide can suppress honest responses. The team focused on future self-continuity, the sense of connection between a person's present and future self, which the authors say is closely tied to suicidal thinking.

The research used a large language model (LLM), Claude 3.5 Sonnet, to analyze audio responses to 15 interview prompts about participants' lives and futures. The sample included 164 participants, 93 of whom reported past-year suicidal ideation. Each participant also rated, on a 1-to-7 scale, how likely they thought they were to attempt suicide in the future. The researchers compared LLM-derived signals and standard assessment tools against that self-reported perceived risk to evaluate predictive alignment.

The LLM identified speech features standard tools overlooked, including coherence when describing the future, emotional tone, and level of specific detail. According to the researchers, those signals produced a stronger measure of future self-continuity and better predicted participants' own ratings of their risk. The authors note the study compared model outputs to perceived risk rather than verified attempts, but they emphasize that perceived risk is clinically valuable because it predicts later suicidal behavior. They suggest LLM-driven scoring could be deployed in hospitals, crisis hotlines, or therapy sessions, potentially using a brief set of recorded questions to generate a risk score, and may also aid detection of depression and anxiety.


Saudi AI startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing AI agents for enterprises and public institutions.

Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal and multilingual AI models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments starting Tuesday, Dec. 2.
