Researchers at Touro University published a study in the Journal of Personality Assessment showing that an artificial intelligence tool can identify nuances in speech linked to perceived suicide risk that conventional assessments often miss. Lead author Yosef Sokol, PhD, and colleagues argue that typical multiple-choice measures lack the sensitivity to capture how people think and feel about their future, and that directly asking about suicide can suppress honest responses. The team focused on future self-continuity, the sense of connection between a person’s present and future self, which the authors say is closely tied to suicidal thinking.
The research used a large language model (LLM), Claude 3.5 Sonnet, to analyze audio responses to 15 interview prompts about participants’ lives and futures. The study sample included 164 participants, 93 of whom reported past-year suicidal ideation. Participants also rated their own perceived risk on a 1-to-7 scale, indicating how likely they thought they were to attempt suicide in the future. The researchers compared LLM-derived signals and standard assessment tools against that self-reported perceived risk to evaluate predictive alignment.
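As a rough illustration of what LLM-based scoring of this kind might look like, the sketch below asks Claude 3.5 Sonnet to rate a transcribed interview answer on the sorts of speech features the study examined. The prompt wording, the three rating dimensions, the 1-to-7 scale, and the function name are assumptions for illustration only, not the study's actual protocol.

```python
# Illustrative sketch only: the prompt, rating dimensions, and helper name below
# are assumptions, not the published study's scoring procedure.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SCORING_PROMPT = (
    "You are rating a transcribed interview answer about the speaker's future.\n"
    "Rate the response from 1 (very low) to 7 (very high) on each dimension:\n"
    "1. Coherence of the future description\n"
    "2. Emotional tone (negative to positive)\n"
    "3. Level of specific detail\n"
    "Reply with three integers separated by commas.\n\n"
    "Transcript:\n{transcript}"
)

def score_transcript(transcript: str) -> list[int]:
    """Ask the model to rate one transcribed answer on three speech features."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=20,
        messages=[{"role": "user",
                   "content": SCORING_PROMPT.format(transcript=transcript)}],
    )
    # Parse the model's "c, t, d" reply into three integer scores.
    return [int(x) for x in response.content[0].text.split(",")]

# Example: scores = score_transcript("In five years I hope to ...")
```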
The LLM identified speech features that standard tools overlooked, including coherence when describing the future, emotional tone, and level of specific detail. According to the researchers, those signals produced a stronger measure of future self-continuity and better predicted participants’ own ratings of their risk. The authors note the study compared model outputs to perceived risk rather than verified attempts, but they emphasize that perceived risk is clinically valuable because it predicts later suicidal behavior. They suggest LLM-driven scoring could be deployed in hospitals, crisis hotlines, or therapy sessions, potentially using a brief set of recorded questions to generate a risk score, and may also aid detection of depression and anxiety.
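In principle, the "predictive alignment" comparison comes down to asking which measure tracks the self-rated 1-to-7 perceived-risk score more closely. The sketch below illustrates that idea with placeholder data and a simple rank correlation; it is not the authors' actual statistical analysis, and the variable names and numbers are invented.

```python
# Placeholder illustration of comparing two predictors against self-rated risk.
# Synthetic data only -- not the study's data or its analysis method.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 164                                      # sample size reported in the article
perceived_risk = rng.integers(1, 8, size=n)  # self-rated 1-7 perceived risk
llm_continuity = rng.normal(size=n)          # LLM-derived future self-continuity score
questionnaire = rng.normal(size=n)           # standard multiple-choice measure

for name, scores in [("LLM-derived", llm_continuity), ("questionnaire", questionnaire)]:
    rho, p = spearmanr(scores, perceived_risk)
    print(f"{name:>13}: rho={rho:+.2f}, p={p:.3f}")
```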
