Disinformation: early detection of harmful online news trends


Disinformation poses a major challenge to our society. Concerted disinformation campaigns are one facet of hybrid threats: they can aim to disrupt or damage specific critical infrastructures – such as the secure supply of energy, raw materials or medicines – or to undermine broader ones, such as democratic institutions, by destroying trust in them or their representatives.

Timely detection of disinformation campaigns is therefore an essential contribution to resilience against such threats. Currently, however, there are hardly any tools available to actively detect disinformation campaigns at an early stage. Those affected often learn about their involvement far too late, which limits their ability to respond effectively. Often, only damage limitation remains. Early detection of such trends would provide room for maneuver, e.g. to prepare appropriate counter-narratives. DesinFact aims to improve the state of research on automatic detection of disinformation trends, to identify gaps in technical, legal and ethical areas, and to develop suitable approaches to enable such a system.

To detect disinformation campaigns, different data sources need to be monitored for emerging trends. Automatically classifying these trends as disinformation campaigns requires approaches that meet the highest quality standards: an erroneous decision – falsely declaring that fake news is being disseminated – can rebound on the accused authors or their institutions and considerably damage their reputation. Such errors likewise damage the reputation of the providers of the technology – i.e., the consortium partners involved – as well as public trust in the technology itself.

DesinFact therefore places its main research focus on increasing trustworthiness in technical, legal and ethical respects. Methods for measurable quality improvement and for the explainability of decisions are to be researched; these methods should be understandable for experts as well as for operational users.

One way to increase accuracy is to link the analysis of network structures and communication patterns with content-based analysis. To this end, DesinFact will explore methods for detecting dissemination channels and key actors in disinformation networks and combine them with content-assessment methods. Another focus of DesinFact is research into a possible public deployment of a disinformation-detection system, which would enable citizens to have online content checked for disinformation. DesinFact will explore the socio-technical aspects relevant to an adequate implementation of such a technology.
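The idea of combining a network signal with a content signal can be illustrated with a minimal sketch. This is not the DesinFact system; the account names, the content-classifier scores, and the equal weighting of the two signals are hypothetical, and a plain PageRank stands in for whatever network-analysis method the project ultimately researches.

```python
# Sketch: combine a network signal (PageRank centrality in a sharing graph)
# with a content signal to rank candidate key actors.
# All account names, scores, and weights below are hypothetical.

def pagerank(graph, damping=0.85, iterations=50):
    """Basic PageRank over an adjacency dict {node: [successors]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, successors in graph.items():
            if successors:
                share = damping * rank[node] / len(successors)
                for succ in successors:
                    new_rank[succ] += share
            else:
                # Dangling node: spread its rank evenly over all nodes.
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
        rank = new_rank
    return rank

# Toy sharing network: an edge u -> v means "u reshares content from v",
# so accounts whose content is widely reshared gain centrality.
graph = {
    "acct_a": ["acct_hub"],
    "acct_b": ["acct_hub"],
    "acct_c": ["acct_hub", "acct_a"],
    "acct_hub": [],
}

# Hypothetical per-account output of a content-based classifier
# (probability that the account's posts contain disinformation).
content_score = {"acct_a": 0.2, "acct_b": 0.1, "acct_c": 0.3, "acct_hub": 0.9}

centrality = pagerank(graph)
# Combined risk: equal-weight mix of network centrality and content score.
risk = {acct: 0.5 * centrality[acct] + 0.5 * content_score[acct]
        for acct in graph}
top_actor = max(risk, key=risk.get)
```

In this toy example the combined score singles out the account that is both structurally central and flagged by the content classifier; in practice the weighting and the individual signals would themselves be subjects of the research described above.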

However, since assessing disinformation is a highly complex task that depends on numerous factors – such as age, general education, or cultural, political, and religious background – controversial decisions can hardly be avoided. Accordingly, both the evaluation systems and the presentation of results must be clear and understandable. Corresponding interdisciplinary studies are a central part of DesinFact.


02.01.2024
Fariba Karimi, Faculty Member at the Complexity Science Hub © Matthias Raddant

Bernhard Haslhofer, Faculty Member at the Complexity Science Hub © Anja Böck

Funded by

Project Partners

AIT Austrian Institute of Technology
Federal Chancellery
Federal Ministry of Defense
leiwand AI gmbh
University for Continuing Education Krems
X-Net Services GmbH

