A suicide risk prediction model that uses data from electronic health records accurately stratifies risk without adding to clinicians’ workload, results of a new study suggest.
The findings add to evidence supporting use of suicide risk prediction models to augment traditional clinician assessments such as self-report questionnaires, lead author Andrea Kline-Simon, MS, senior data consultant, Division of Research, Kaiser Permanente Northern California (KPNC), Oakland, told Medscape Medical News.
“Overall, this study suggests these models could supplement clinicians’ current work and be set in a way that might not impede workload,” she said.
The study was published online October 21 in JAMA Network Open.
A Serious and Growing Problem
To address the serious and growing problem of suicide across the United States, behavioral health professionals need the best possible information and tools to identify patients at risk so they can intervene early, said Kline-Simon.
The investigators wanted to validate a suicide risk prediction model developed by the Mental Health Research Network (MHRN) using data from 20 million mental health care visits across seven health systems.
The model uses electronic health record measures, including demographic characteristics, Patient Health Questionnaire-9 item scores, comorbidities, medications, mental health visits, and suicide attempts in the years before the encounter date.
These values are used to create a risk score, and a higher score indicates a higher predicted risk for a suicide attempt, said Kline-Simon.
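Conceptually, models of this kind combine weighted EHR features into a single predicted probability. The sketch below is purely illustrative — the feature names, weights, and logistic form are assumptions for exposition, not the actual MHRN model specification.

```python
# Illustrative sketch only: how EHR-derived features might be combined
# into one risk score via a logistic model. Weights and feature names
# are hypothetical, not taken from the MHRN model.
import math

def risk_score(features, weights, intercept=-6.0):
    """Return a predicted probability from a weighted sum of features."""
    z = intercept + sum(w * features.get(name, 0.0)
                        for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability in (0, 1)

# Hypothetical weights for a few of the feature types the article lists.
weights = {
    "phq9_item9": 0.9,        # PHQ-9 suicidal-ideation item (0-3)
    "prior_attempt": 1.5,     # suicide attempt in years before the visit
    "mh_visits_past_yr": 0.05 # count of recent mental health visits
}

low = risk_score({"phq9_item9": 0, "prior_attempt": 0,
                  "mh_visits_past_yr": 2}, weights)
high = risk_score({"phq9_item9": 3, "prior_attempt": 1,
                   "mh_visits_past_yr": 10}, weights)
```

A patient with more risk-associated features receives a higher score, which is the property the alerting thresholds described below rely on.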
First, the researchers validated MHRN’s suicide-risk model using KPNC data to confirm the predictive performance of the model among patients not included in the model development.
The study included mental health encounters at KPNC, an integrated health care delivery system serving 4.3 million members.
Over 1 year, they identified 1,408,683 mental health encounters (254,779 unique patients). Patients' mean age was 40.7 years, 35.3% were men, and 24.8% were Hispanic or Black. About 0.6% of patients attempted suicide within 90 days of a visit.
Results showed the model's predictive performance held up in the new population. At the 95th percentile cut point, sensitivity was 41.3% (95% CI, 39.5% – 43.3%) and positive predictive value was 6.4% (95% CI, 6.2% – 6.7%).
No Alert Fatigue
The researchers calculated the expected number of alerts at differing risk thresholds, ranging from the top 5% to the top 0.5% of scores, to help understand “the real-life impact” of the system, said Kline-Simon.
“In healthcare, alert fatigue, or the state of being desensitized by a large number of frequent alerts, is a real danger and can easily overwhelm and distract clinicians,” she noted.
The median number of daily mental health visits with alerts varied widely depending on the risk threshold set for the alerts. For example, at the 95th percentile of risk there would be 162 daily alerts, while at the 99.5th percentile of risk there would be only four daily alerts.
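The mechanics behind those numbers are simple: a percentile threshold is a cut point on the score distribution, and only encounters scoring at or above it trigger an alert. A minimal sketch, using simulated scores rather than real model output:

```python
# Sketch: how a percentile threshold translates into alert volume.
# Scores here are random stand-ins; the study used actual model scores.
import random

random.seed(0)
scores = [random.random() for _ in range(10_000)]  # simulated risk scores

def alerts_at_percentile(scores, pct):
    """Count scores at or above the pct-th percentile cut point."""
    cutoff = sorted(scores)[int(len(scores) * pct / 100)]
    return sum(s >= cutoff for s in scores)

n_95 = alerts_at_percentile(scores, 95)     # top 5% of scores flagged
n_995 = alerts_at_percentile(scores, 99.5)  # top 0.5% of scores flagged
```

Raising the threshold from the 95th to the 99.5th percentile cuts the flagged fraction tenfold, which is why the study's alert counts fall so sharply across thresholds.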
Kline-Simon believes a risk prediction model will provide one of “the best possible tools” to identify patients at risk for suicide. This could be a boon to physicians overwhelmed by “incredibly detailed” electronic health records that are “filled with huge amounts of data,” she said.
“With predictive models, we can bring together many parts of a patient’s health record into a single score and create an opportunity to identify risk signals that are not as easily apparent during routine care,” said Kline-Simon.
She noted that predictive models will “supplement” a clinician’s work “by highlighting areas of higher risk that are difficult to tease out otherwise.”
She emphasized that risk prediction models “do a better job of identifying risk than tools such as commonly used self-report questionnaires can do alone.”
However, before this or another model can be implemented, a number of clinical, ethical, legal, and other questions need to be addressed, said Kline-Simon.
The authors note the findings may not be generalizable to all health care systems. In addition, the efficacy of interventions associated with suicide risk alerts remains uncertain, they add.
Key Challenges Remain
In an accompanying editorial, Roy H. Perlis, MD, Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, and Stephan D. Fihn, MD, Department of Medicine, University of Washington, Seattle, note the model used in the study is “highly accurate” but “embodies key challenges” in suicide screening.
“The positive predictive value of their model is 6%, which means that 17 individuals would need to receive an intervention to prevent a single suicide attempt,” they write.
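The editorialists' figure is the reciprocal of the positive predictive value, under the simplifying assumption that intervening on every true positive prevents the attempt:

```python
# Arithmetic behind the editorial's point: the number of flagged
# individuals per true attempt is the reciprocal of the PPV
# (assuming, as the editorial does, that each intervention on a
# true positive prevents one attempt).
ppv = 0.06            # 6% positive predictive value
per_attempt = 1 / ppv # about 16.7, i.e. roughly 17 individuals
```

The same arithmetic explains why a modest-looking PPV translates into a substantial intervention burden at scale.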
High false-positive rates have long posed a challenge for screening efforts, and machine learning prediction models have not solved it, say the editorialists.
They stress that for screening efforts to be clinically useful, there must also be effective and accessible interventions, which involve adequate resources to ensure diagnosis and treatment.
“Otherwise, the expense and burden on patients, families, clinicians, and staff are to no avail.”
The authors and editorialists have reported no relevant financial disclosures.