

A machine learning approach to detect potentially harmful and protective suicide-related content in broadcast media

Suicide-related media content can have preventive or harmful effects depending on its specific characteristics. Proactive media screening for suicide prevention is hampered by the scarcity of machine learning approaches for detecting specific characteristics in news reports. This study applied machine learning to label large quantities of broadcast (TV and radio) media data according to media recommendations for reporting on suicide.

We manually labeled 2519 English transcripts from 44 broadcast sources in Oregon and Washington, USA, published between April 2019 and March 2020. We conducted a content analysis of the media reports regarding content characteristics. We then trained a benchmark of machine learning models, including a majority classifier, an approach based on word frequency (TF-IDF with a linear SVM), and a deep learning model (BERT).
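The TF-IDF-with-linear-SVM baseline described above can be sketched in a few lines of scikit-learn. The texts, labels, and hyperparameters below are purely illustrative stand-ins, not the authors' actual data or pipeline:

```python
# Minimal sketch of a TF-IDF + linear SVM text classifier (scikit-learn).
# The example texts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "Report focuses on the suicide death of a public figure",
    "Story about coping with a crisis and seeking help",
    "Coverage of a local suicide death",
    "Feature on recovery and available support resources",
]
labels = ["harmful", "protective", "harmful", "protective"]

# TF-IDF turns each transcript into a weighted word-frequency vector;
# the linear SVM then learns a separating hyperplane over those vectors.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

print(model.predict(["An article on getting help during a crisis"]))
```

In practice such a pipeline would be trained on the full labeled transcript set and evaluated on a held-out test split, as the study describes.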

We applied these models first to a selection of simpler tasks (e.g., identifying a focus on a suicide death) and subsequently to putatively more complex ones (e.g., determining the main focus of a text from 14 categories). TF-IDF with SVM and BERT clearly outperformed the naive majority classifier on all characteristics.

In a test dataset not used during model training, F1-scores (the harmonic mean of precision and recall) ranged from 0.90 for identifying celebrity suicide down to 0.58 for identifying the main focus of the media item. Model performance depended strongly on the number of available training samples, and much less on the assumed difficulty of the classification task.
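The F1-score mentioned above combines precision and recall into a single number. A short worked example, using invented counts rather than figures from the study:

```python
# F1 is the harmonic mean of precision and recall.
# The counts below are illustrative, not taken from the study.
tp, fp, fn = 45, 10, 5  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # fraction of positive predictions that are correct
recall = tp / (tp + fn)     # fraction of actual positives that are found
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 3), round(recall, 3), round(f1, 3))
```

Because the harmonic mean punishes imbalance, a model cannot reach a high F1 by excelling at only one of precision or recall.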

This study demonstrates that machine learning models can achieve very satisfactory results for classifying suicide-related broadcast media content, including multi-class characteristics, as long as enough training samples are available. The developed models enable future large-scale screening and investigations of broadcast media.

H. Metzler, H. Baginski, D. Garcia, T. Niederkrotenthaler, A machine learning approach to detect potentially harmful and protective suicide-related content in broadcast media, PLOS ONE 19(5) (2024) e0300917.


