Complexity Science Hub researcher Fariba Karimi leads the EU project MAMMOth to tackle discrimination through AI

17.11.2022

News

New EU project to reduce discrimination through AI

MAMMOTH IS THE NAME OF THE NEW EU PROJECT, AND IT INDEED FACES A MAMMOTH TASK.

AI opens up many possibilities. At the same time, it can reinforce discriminatory decision-making – for example, in education, job application processes, or advertising. The MAMMOth project aims to change that.

CSH scientist Fariba Karimi and her team are developing fairness measurements that take into account not just one attribute, such as skin color, but several overlapping attributes like gender, age and race.

Business, politics, and many other sectors increasingly rely on artificial intelligence and make far-reaching decisions for individuals and society on that basis. “On the one hand, this opens up enormous opportunities for sectors such as education, banking, or healthcare, as well as on a personal level, for example in job applications or ad targeting. On the other hand, artificial intelligence (AI) runs the risk of further reinforcing discrimination against minorities and marginalized groups in the population – based on so-called protected attributes such as gender, race, and age,” explains Fariba Karimi, senior scientist in Computational Social Science at CSH Vienna.

In this way, AI systems reinforce existing biases and even contribute to the emergence of new, unknown types of discrimination – so-called black-box biases – instead of using their potential to compensate for inequalities.

Capturing discrimination through multiple attributes

Over the course of the three-year project, Karimi and her team will develop fairness measurements that go beyond single protected attributes such as gender or skin color. “For example, we want our fairness measurements to work not only for women, but also for women who are immigrants or from disadvantaged ethnic groups,” she says. Her focus is precisely this multi-criteria fairness in network data – when, for example, inequalities arise because people occupy different positions in an underlying network.
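To illustrate why such multi-attribute measures matter, consider the following minimal sketch in Python. It is not code from the MAMMOth project; the hiring data and group labels are invented for illustration. A check on a single attribute can look perfectly fair while an intersectional check reveals bias:

    # Hypothetical example: single-attribute fairness checks can miss
    # intersectional bias. All data below is invented.
    import pandas as pd

    # Hypothetical hiring decisions: 1 = positive outcome
    df = pd.DataFrame({
        "gender":    ["f", "f", "f", "f", "m", "m", "m", "m"],
        "immigrant": [1,   1,   0,   0,   1,   1,   0,   0],
        "hired":     [0,   0,   1,   1,   1,   0,   1,   0],
    })

    # Single-attribute view: the positive rate is 0.5 for both genders,
    # so a gender-only fairness check passes.
    print(df.groupby("gender")["hired"].mean())

    # Intersectional view: immigrant women receive no positive outcomes
    # at all, which the single-attribute check could not detect.
    print(df.groupby(["gender", "immigrant"])["hired"].mean())

In this toy dataset, the hiring rate per gender is identical, yet the subgroup of immigrant women is entirely excluded – exactly the kind of overlap that single-attribute fairness measures miss.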

Minorities should not become invisible in algorithms

Discrimination is not a new problem. But the fact that AI technology continues to reinforce it has led to the rise of fairness-aware machine learning (ML) as part of responsible AI. The aim is to develop machine learning models that perform well in terms of prediction while not discriminating with respect to protected attributes such as gender or race. “Much effort has been made, but so far the proposed methods have limited impact and do not reflect the complexity and requirements of real-world applications,” Karimi states.
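The general recipe behind many fairness-aware ML methods can be sketched as follows. This is a toy illustration, not a method from the MAMMOth project: a logistic model is trained on invented data with an assumed demographic-parity penalty, whose weight lam is chosen arbitrarily here.

    # Toy sketch of fairness-aware training: minimize prediction error
    # plus a penalty on the demographic-parity gap between two groups.
    # Data, penalty weight, and all names are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    X = rng.normal(size=(n, 3))
    group = rng.integers(0, 2, size=n)   # protected attribute (0/1)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.zeros(3)
    lam = 2.0                            # fairness penalty weight (assumed)

    for _ in range(2000):
        p = sigmoid(X @ w)
        grad_err = X.T @ (p - y) / n     # logistic-loss gradient
        # Demographic-parity gap: difference in mean predicted score per group
        gap = p[group == 1].mean() - p[group == 0].mean()
        s = p * (1 - p)                  # derivative of the sigmoid
        grad_gap = (X[group == 1].T @ s[group == 1]) / (group == 1).sum() \
                 - (X[group == 0].T @ s[group == 0]) / (group == 0).sum()
        # Gradient of 0.5 * lam * gap**2 added to the error gradient
        w -= 0.1 * (grad_err + lam * gap * grad_gap)

    p = sigmoid(X @ w)
    print("remaining parity gap:", p[group == 1].mean() - p[group == 0].mean())

Note that this standard single-attribute formulation is precisely what the quote above criticizes: extending such penalties to several overlapping protected attributes is the harder, multi-criteria problem the project addresses.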

Twelve institutes pull together

The new EU project, MAMMOth, aims to change that: experts from twelve institutions are developing an innovative, fairness-aware, AI data-driven foundation that provides the tools needed to mitigate discrimination and multi-discrimination and to ensure the accountability of AI systems. “By designing and implementing multi-criteria fairness measures and mitigations, we want to ensure that minorities do not become invisible in algorithms and are treated fairly in sectors that rely on machines in decision-making processes. Fairer algorithms mean better representation and diversity, which in turn makes societies more inclusive and just,” Karimi says.

To this end, the project will actively target numerous communities of vulnerable and/or underrepresented groups in AI research from the outset. The so-called co-creation approach will ensure that the real needs and hardships of users are at the heart of the research agenda and guide the project’s activities. The solutions developed will then be demonstrated in pilot projects in three relevant areas (finance/loan applications, identity verification systems and academic evaluation).

MAMMOth = Multi-Attribute, Multimodal Bias Mitigation in AI Systems
