
22.06.2021


Digital Humanism: CSH successful with three projects

DIGITAL HUMANISM: A NEW RESEARCH INITIATIVE IN VIENNA

In fall 2020, the Vienna Science, Research and Technology Fund (WWTF) issued a call for projects on “Digital Humanism” with the aim to “support strongly interdisciplinary research projects dealing with social networks, questions of democracy policy, or the use of artificial intelligence in care.”

The CSH is proud to announce that three of the nine successful projects were submitted by (or with) Hub scientists:

 

1. “EMOTIONAL MISINFORMATION—THE INTERPLAY OF EMOTION AND MISINFORMATION SPREADING ON SOCIAL MEDIA”

(BY HANNAH METZLER, ET AL.)

 

2. “HUMANIZED ALGORITHMS: IDENTIFYING AND MITIGATING ALGORITHMIC BIASES IN SOCIAL NETWORKS”

(BY FARIBA KARIMI, ET AL.)

 

3. “TRANSPARENT AUTOMATED CONTENT MODERATION (TACO)”

(BY SOPHIE LECHELER, ET AL.)

 

PROJECT 1.

EMOTIONAL MISINFORMATION—THE INTERPLAY OF EMOTION AND MISINFORMATION SPREADING ON SOCIAL MEDIA

PROJECT LEAD: HANNAH METZLER

TEAM:

Hannah Metzler (CSH); Annie Waldherr (University of Vienna); David Garcia (CSH Vienna & Graz University of Technology)

WHAT IS THE PROJECT ABOUT?

The spread of misinformation via social media contributes to a global threat to trust in science and democratic institutions, with consequences for public health and for societal conflict. Because emotion biases information processing, certain emotional states are likely linked to the spread of misinformation, a link that becomes especially visible in situations of high uncertainty.

This project aims to understand how emotions influence the tendency to believe and share inaccurate content, and to test intervention strategies that mitigate the spread of emotional misinformation.

We combine three approaches:

  1. characterizing the dynamics of emotional misinformation spreading using large-scale social media data,
  2. experimentally testing the potential of individual emotion regulation interventions to reduce misinformation sharing, and
  3. integrating both sources of evidence to inform an agent-based model.


This allows us to identify the most promising interventions to reduce misinformation spreading at the macro scale, and to simulate how algorithmic filters for emotional information affect the spread of misinformation through social networks.
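As a rough illustration of what such a simulation could look like, here is a minimal Python sketch of an agent-based model in which emotionally aroused agents reshare a false post with higher probability. All parameters (such as EMOTION_BOOST and AROUSAL_RATE) and the random contact network are illustrative assumptions, not the project's actual model.

```python
import random

random.seed(42)

N = 1000              # number of agents (assumed)
AVG_DEGREE = 8        # average number of contacts per agent (assumed)
BASE_SHARE = 0.05     # baseline probability of resharing a false post (assumed)
EMOTION_BOOST = 0.25  # extra sharing probability for aroused agents (assumed)
AROUSAL_RATE = 0.3    # fraction of agents in a high-arousal state (assumed)
STEPS = 20            # number of simulation rounds

# Random contact network as a simple stand-in for a social media graph.
neighbors = {i: set() for i in range(N)}
p_edge = AVG_DEGREE / (N - 1)
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p_edge:
            neighbors[i].add(j)
            neighbors[j].add(i)

aroused = {i: random.random() < AROUSAL_RATE for i in range(N)}
has_shared = {i: False for i in range(N)}

# Seed the false post with one random agent and expose its contacts.
seed = random.randrange(N)
has_shared[seed] = True
exposed = set(neighbors[seed])

for _ in range(STEPS):
    newly_shared = []
    for agent in exposed:
        if has_shared[agent]:
            continue
        # Emotional arousal raises the probability of resharing.
        p = BASE_SHARE + (EMOTION_BOOST if aroused[agent] else 0.0)
        if random.random() < p:
            newly_shared.append(agent)
    for agent in newly_shared:
        has_shared[agent] = True
        exposed.update(neighbors[agent])

reach = sum(has_shared.values())
print(f"Agents who reshared the false post after {STEPS} steps: {reach} of {N}")
```

In this toy setting, lowering EMOTION_BOOST plays the role of an emotion regulation intervention, while restricting which agents get exposed mimics an algorithmic filter; comparing the final reach under such variations is roughly the kind of macro-scale comparison described above.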

Together, these results will provide guidance on how to analyze and adapt information and communication technologies in a way that takes human information processing into account. The project approaches the problem of misinformation spreading from a radically new angle, directly tackling emotional influences on attention and sharing behavior.

PROJECT 2.

HUMANIZED ALGORITHMS: IDENTIFYING AND MITIGATING ALGORITHMIC BIASES IN SOCIAL NETWORKS

PROJECT LEAD: FARIBA KARIMI

TEAM:

Fariba Karimi (CSH & CEU); Markus Strohmaier (RWTH Aachen & CSH External Faculty); Anna Koltai (Center for Social Sciences, Hungary)

PROJECT SUMMARY:

Many ranking and recommendation algorithms rely on user-generated social network data. For example, social media platforms such as Twitter or LinkedIn use this information to rank people and to recommend new social links to users. The networks people generate are driven by fundamental social mechanisms such as popularity and homophily, and they often encode diverse socio-demographic attributes. These attributes play an important role in how individuals interact with each other and thus shape the structure of networks. More importantly, network structure plays a decisive role in dynamical processes on networks, such as the diffusion of information and the formation and evolution of biases, norms, and culture.
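To make these mechanisms concrete, the following Python sketch grows a toy network in which new nodes attach preferentially to popular (high-degree) nodes and accept same-group links more often (homophily), and then ranks nodes by degree. The group sizes, the HOMOPHILY parameter, and the degree-based ranking are simplified assumptions for illustration, not the algorithms studied in this project.

```python
import random

random.seed(1)

N = 1000              # number of nodes (assumed)
M = 2                 # links added by each newly arriving node (assumed)
MINORITY_SHARE = 0.2  # fraction of nodes in the minority group (assumed)
HOMOPHILY = 0.8       # probability of accepting a same-group link (assumed)
TOP_K = 50            # size of the "top ranked" list (assumed)

group = [1 if random.random() < MINORITY_SHARE else 0 for _ in range(N)]
degree = [0] * N

# Small fully connected seed network of the first M nodes.
for i in range(M):
    for j in range(i + 1, M):
        degree[i] += 1
        degree[j] += 1

# Growth: each new node attaches to M existing nodes, preferring
# high-degree (popular) and same-group (homophilous) targets.
for new in range(M, N):
    weights = [d + 1 for d in degree[:new]]  # popularity: proportional to degree
    chosen = set()
    while len(chosen) < M:
        cand = random.choices(range(new), weights=weights)[0]
        same_group = group[cand] == group[new]
        accept = HOMOPHILY if same_group else 1 - HOMOPHILY  # homophily filter
        if cand not in chosen and random.random() < accept:
            chosen.add(cand)
    for cand in chosen:
        degree[cand] += 1
        degree[new] += 1

# A naive "ranking algorithm": order nodes by degree and inspect the top list.
top = sorted(range(N), key=lambda i: degree[i], reverse=True)[:TOP_K]
minority_in_top = sum(group[i] for i in top) / TOP_K
print(f"Minority share in population: {MINORITY_SHARE:.0%}, "
      f"in top {TOP_K} by degree: {minority_in_top:.0%}")
```

In strongly homophilic settings like this one, the minority group typically ends up underrepresented among the top-ranked nodes even though the ranking itself uses no group information; this is the kind of structurally induced bias the project sets out to identify and mitigate.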

However, very little is known about how network structure affects algorithms, to what extent machine learning methods amplify social biases, and whether practical approaches exist to mitigate network-based algorithmic biases.

The overarching aim of this project is to study the role of ranking and recommendation algorithms in social networks and their effects on disadvantaged groups. To achieve this goal, the research plan contains three crucial components:

  1. identifying structural conditions of attributed social networks that can introduce biases in rankings,
  2. comprehensively analyzing different network-based ranking and recommendation algorithms and their contribution to reinforcing social inequalities, and
  3. investigating strategies to mitigate the emergence of biases in ranking and recommendation algorithms.


The findings of this project will break new ground in responsible and explainable AI and will contribute to a more humanistic approach to algorithms, helping to prevent the reinforcement of discrimination in society.

PROJECT 3.

TRANSPARENT AUTOMATED CONTENT MODERATION (TACO)

PROJECT LEAD: SOPHIE LECHELER (UNIVERSITY OF VIENNA)

TEAM:

Sophie Lecheler; Allan Hanbury (TU Wien & CSH Vienna)

PROJECT SUMMARY:

Online political discussions are increasingly perceived as negative, aggressive, and toxic. This is worrying, because exposure to toxic talk undermines trust and fosters cynicism, contributing to a polarized society. Defining what counts as “toxic” is therefore one of the most pressing challenges for researchers today, because such a definition may be used to develop (semi-)automated content moderation systems that ensure healthy political conversations on a global scale.

However, the available research on toxic talk and content moderation is elite-driven and imposes top-down definitions of what is “good” or “bad” on users. This has resulted in biased content moderation models and has damaged the reputation of those who implemented them. More importantly, a top-down approach removes agency from citizens at a time when many already feel they have too little influence on their daily information intake.

Therefore, TACo proposes a novel, user-centric approach to automated content moderation. We conduct qualitative and exploratory social science research to learn what citizens themselves want when it comes to toxic talk and content moderation. We then develop moderation scenarios based on this knowledge and test the resulting models for usefulness and reliability.

Finally, we test whether what people “want” truly has beneficial effects for them: we conduct experiments that test the effects of these models on citizens’ political trust, engagement, and well-being.

“It is very exciting to be approaching this important challenge in a highly interdisciplinary way,” says Allan Hanbury.
