Fariba Karimi Awarded ERC Starting Grant

Karimi will explore how artificial intelligence in online social networks contributes to social inequality, aiming to develop fairer algorithms.

The use of AI in social networks—such as recommendation systems and timelines on platforms like LinkedIn, Google Scholar, and X—has been linked to increased social inequality and discrimination. Fariba Karimi, a faculty member at the Complexity Science Hub and professor at Graz University of Technology (TU Graz), is set to investigate these dynamics through her project, “NetFair – Network Fairness.”

“Our goal is to understand how algorithms perpetuate inequality and discrimination and, more importantly, to develop strategies to prevent it,” Karimi explains. Her five-year project has been funded by the European Research Council’s Starting Grant, securing nearly 1.5 million euros.

Fariba Karimi leads the research team on Algorithmic Fairness at CSH and is a professor of Social Data Science at TU Graz.

Born in 1981 in Tehran, Iran, Fariba Karimi studied physics at Shiraz University, Shahid Beheshti University in Tehran, and Lund University in Sweden. In 2015, she earned her PhD in physics and computer science from Umeå University, Sweden, and subsequently held a postdoctoral position at the Leibniz Institute for the Social Sciences in Cologne.

In 2023, she was honored with the Young Scientist Award from the German Physical Society for her work on inequality in complex networks. The previous year, she and a team of international researchers received an EU Horizon grant to investigate multi-criteria fairness in AI systems.

ALGORITHMS WORK WITH WHAT THEY KNOW

Even before the advent of online platforms, women, minorities, and individuals with disabilities faced disadvantages in professional and educational settings, partly because of their marginalized positions in offline social networks and their comparatively low social capital. AI algorithms on online platforms often mirror these societal biases, feeding them into recommendation systems and timelines.

As a result, AI-based systems on online platforms are more likely to suggest connections with more central individuals in social networks, while rendering marginalized individuals even less visible. However, the exact mechanisms behind this effect in AI systems are not yet fully understood.
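This feedback loop can be illustrated with a toy simulation. The sketch below is not Karimi's model or any platform's actual system; the group labels, sizes, and starting degrees are illustrative assumptions. It shows how a simple "recommend the most-connected users" rule, combined with a small initial gap in connections, locks lower-degree users out of recommendations entirely.

```python
# Toy sketch (illustrative only): how a degree-based "who to follow"
# recommender can amplify an initial visibility gap between groups.
import random

random.seed(42)

# Assumed 80/20 majority/minority split; the minority starts with a
# slightly lower degree, standing in for lower initial social capital.
degrees = {f"maj{i}": 3 for i in range(8)}
degrees.update({f"min{i}": 2 for i in range(2)})

def recommend(degrees, k=3):
    """Recommend the k highest-degree nodes, as many ranking systems do."""
    return sorted(degrees, key=degrees.get, reverse=True)[:k]

# Each accepted recommendation raises the recommended node's degree,
# which feeds back into the next round's ranking (rich-get-richer loop).
minority_recs = 0
total_recs = 0
for _ in range(50):
    for node in recommend(degrees):
        degrees[node] += 1
        total_recs += 1
        if node.startswith("min"):
            minority_recs += 1

minority_share = minority_recs / total_recs
print(f"minority share of recommendations: {minority_share:.2f}")
# → minority share of recommendations: 0.00
```

Even though the minority makes up 20% of the network, it never appears in the recommendations: the initial one-edge gap is enough for the ranking to reinforce itself indefinitely.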

MAKING INTERSECTIONALITY MEASURABLE

Social inequality and marginalization arise from a complex interplay of social characteristics such as gender, origin, and income, a concept known in the social sciences as intersectionality. “So far, there are only qualitative findings on intersectional inequality in social networks,” says Karimi. In her ERC project, she aims to make intersectionality quantitatively measurable and then apply that measure to AI-based online social platforms.

TOWARDS FAIR ALGORITHMS

To achieve this, Karimi will first develop improved models of social networks and use data analysis and experiments to clarify which factors matter and how they interact. “We will use these improved models to study their impact on algorithms and online platforms over time,” Karimi notes. Beyond diagnosis, she wants to develop methods that not only prevent but actively reduce inequality and discrimination in online networks. “That’s the ultimate goal: fair algorithms for social networks.”
