
News

Experts Urge Complex Systems Approach to Assess AI Risks

The social context and its complex interactions must be considered, and public engagement must be encouraged

With artificial intelligence increasingly permeating every aspect of our lives, experts are growing concerned about its dangers. In some cases the risks are pressing; in others, they won’t emerge for months or even years. Writing in The Royal Society’s journal, scientists point out that a coherent approach to understanding these threats is still elusive. They call for a complex systems perspective to better assess and mitigate these risks, particularly in light of long-term uncertainties and complex interactions between AI and society.

“Understanding the risks of AI requires recognizing the intricate interplay between technology and society. It’s about navigating the complex, co-evolving systems that shape our decisions and behaviors,” says Fariba Karimi, co-author of the article. Karimi leads the research team on Algorithmic Fairness at the Complexity Science Hub (CSH) and is professor of Social Data Science at TU Graz.

“We should not only discuss what technologies to deploy and how, but also how to adapt the social context to capitalize on positive possibilities. AI possibilities and risks should likely be taken into account in debates about, for instance, economic policy,” adds CSH scientist Dániel Kondor, first author of the study.

Broader and Long-Term Risks

Current risk assessment frameworks often focus on immediate, specific harms, such as bias and safety concerns, according to the authors of the article published in Philosophical Transactions A. “These frameworks often overlook broader, long-term systemic risks that could emerge from the widespread deployment of AI technologies and their interaction with the social context in which they are used,” says Kondor.

“In this paper, we tried to balance the short-term perspectives on algorithms with long-term views of how these technologies affect society. It’s about making sense of both the immediate and systemic consequences of AI,” adds Kondor.

What Happens in Real Life

As a case study to illustrate the potential risks of AI technologies, the scientists discuss how a predictive algorithm was used during the Covid-19 pandemic in the UK for school exams. The new solution was “presumed to be more objective and thus fairer [than asking teachers to predict their students’ performance], relying on a statistical analysis of students’ performance in previous years,” according to the study.

However, when the algorithm was put into practice, several issues emerged. “Once the grading algorithm was applied, inequities became glaringly obvious,” observes Valerie Hafez, an independent researcher and study co-author. “Pupils from disadvantaged communities bore the brunt of the futile effort to counter grading inflation, but even overall, 40% of students received lower marks than they would have reasonably expected.”

Hafez reports that many responses in the consultation report indicate that the risk perceived as significant by teachers—the long-term effect of grading lower than deserved—was different from the risk perceived by the designers of the algorithm. The latter were concerned about grade inflation, the resulting pressure on higher education, and a lack of trust in students’ actual abilities.
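This mismatch between individual ability and historical moderation is easy to see in a toy model. The sketch below is a deliberate simplification, not Ofqual's actual standardization procedure: each student's rank within their class is mapped onto the school's historical grade distribution, so a strong student at a historically low-performing school cannot receive a grade the school has not produced before. The school names and grade lists are hypothetical.

```python
# Toy illustration, NOT the actual grading algorithm: each student's
# rank within their class is mapped onto the school's historical grade
# distribution, regardless of individual performance.
# School names and grade lists below are hypothetical.

historical_grades = {
    "School A": ["A*", "A", "A", "B", "B"],  # historically high-performing
    "School B": ["B", "C", "C", "D", "E"],   # historically disadvantaged
}

def moderated_grade(school: str, rank_in_class: int) -> str:
    """Grade assigned to the student at `rank_in_class` (0 = top of class)."""
    return historical_grades[school][rank_in_class]

# A top student at School B, predicted an A* by their teacher, is capped
# at the best grade their school has historically produced.
print("teacher prediction: A*  ->  algorithm:", moderated_grade("School B", 0))
```

The point of the sketch is the structural effect the teachers identified: the moderation step optimizes an aggregate goal (stable grade distributions) at the cost of individual outcomes concentrated in particular schools.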

Figure: (A) An illustration of the complex system of AI-infused social networks. (B) A model of bias amplification in networks due to feedback between algorithmic ranking and human decisions over time. (i) Simulations are performed on a synthetic network of 2,000 nodes, 30% of which belong to a minority group; minority and majority homophily are both set to 0.7. In such homophilic networks, minorities are underrepresented in the top ranks relative to their 30% share. In each time step, 5 random links are removed from the network and rewired, with higher-ranked nodes more likely to be chosen as targets for the new connections. The ranking is then recalculated, and this rewiring is repeated over many feedback loops; the fraction of minorities in the upper ranks (e.g., the top 10%) declines over time. Results are averaged over 10 independent runs. (ii) The fraction of minorities in the top 10% drops from 23.4% to 21.8% over 60 iterations. (iii) A demographic-parity measure shows how far down the ranking one must go before minorities reach their expected 30% share, all else equal. At the start of the process, 30% minority representation is reached within the top 56% of nodes; by the end, the top 71% must be included to reach this fair representation.

The Scale and the Scope

This case demonstrates several important issues that arise when deploying large-scale algorithmic solutions, emphasize the scientists. “One thing we believe one should be attentive to is the scale—and scope—because algorithms scale: they travel well from one context to the next, even though these contexts may be vastly different. The original context of creation does not simply disappear, rather it is superimposed on all these other contexts,” explains Hafez.

“Long-term risks are not the linear combination of short-term risks. They can escalate exponentially over time. However, with computational models and simulations, we can provide practical insights to better assess these dynamic risks,” adds Karimi.

Computational Models – and Public Participation

This is one of the directions proposed by the scientists for understanding and evaluating risk associated with AI technologies, both in the short- and long-term. “Computational models—like those assessing the effect of AI on minority representation in social networks—can demonstrate how biases in AI systems lead to feedback loops that reinforce societal inequalities,” explains Kondor. Such models can be used to simulate potential risks, offering insights that are difficult to glean from traditional assessment methods.
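Such a ranking feedback loop can be sketched in a few dozen lines. The following is a minimal toy model, not the authors' actual simulation: the network is shrunk from 2,000 to 200 nodes, degree is used as a stand-in for the algorithmic ranking, and five random links per step are rewired toward highly ranked nodes, mirroring the setup described in the figure caption above.

```python
import random

random.seed(42)

# Toy parameters (the paper's figure uses 2,000 nodes; shrunk here for speed)
N = 200
MINORITY_FRAC = 0.3   # 30% of nodes belong to the minority group
HOMOPHILY = 0.7       # tendency of links to connect same-group nodes
N_EDGES = 800
STEPS = 60            # feedback iterations
REWIRES_PER_STEP = 5

# Group labels: 1 = minority, 0 = majority
group = [1 if i < int(N * MINORITY_FRAC) else 0 for i in range(N)]

# Build a homophilic random network: same-group links are accepted with
# probability HOMOPHILY, cross-group links with 1 - HOMOPHILY.
edges = set()
while len(edges) < N_EDGES:
    u, v = random.sample(range(N), 2)
    p = HOMOPHILY if group[u] == group[v] else 1 - HOMOPHILY
    if random.random() < p:
        edges.add((min(u, v), max(u, v)))

def degrees(edges):
    deg = [0] * N
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def minority_share_top(deg, top_frac=0.1):
    """Fraction of minority nodes among the top-ranked (highest-degree) nodes."""
    ranked = sorted(range(N), key=lambda i: -deg[i])
    top = ranked[: int(N * top_frac)]
    return sum(group[i] for i in top) / len(top)

deg = degrees(edges)
initial_share = minority_share_top(deg)

# Feedback loop: remove random links, rewire them toward highly ranked
# (high-degree) nodes, then recompute the ranking.
for _ in range(STEPS):
    for _ in range(REWIRES_PER_STEP):
        edges.remove(random.choice(list(edges)))
        u = random.randrange(N)
        v = random.choices(range(N), weights=deg, k=1)[0]  # rank-biased target
        if u != v:
            edges.add((min(u, v), max(u, v)))
    deg = degrees(edges)

final_share = minority_share_top(deg)
print(f"minority share in top 10%: {initial_share:.3f} -> {final_share:.3f}")
```

Because rewiring favors already-prominent nodes, and homophily keeps the minority's initial prominence low, the minority's share of the top ranks typically erodes over the feedback loop, which is the qualitative effect the figure reports.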

In addition, the study’s authors emphasize the importance of involving laypeople and experts from various fields in the risk assessment process. Competency groups—small, heterogeneous teams that bring together varied perspectives—can be a key tool for fostering democratic participation and ensuring that risk assessments are informed by those most affected by AI technologies.

“A more general issue is the promotion of social resilience, which will help AI-related debates and decision-making function better and avoid pitfalls. In turn, social resilience may depend on many questions unrelated (or at least not directly related) to artificial intelligence,” ponders Kondor. Increasing participatory forms of decision-making can be one important component of raising resilience.

“I think that once you begin to see AI systems as sociotechnical, you cannot separate the people affected by the AI systems from the ‘technical’ aspects. Separating them from the AI system takes away their possibility to shape the infrastructures of classification imposed on them, denying affected persons the power to share in creating worlds attuned to their needs,” says Hafez, who’s an AI policy officer at the Austrian Federal Chancellery.*

About the study

The study “Complex systems perspective in assessing risks in AI,” by Dániel Kondor, Valerie Hafez, Sudhang Shankar, Rania Wazir, and Fariba Karimi was published in Philosophical Transactions A and is available online.

* Valerie Hafez is a policy officer at the Austrian Federal Chancellery, but conducted this research independently. The views expressed in the paper do not necessarily reflect the views or positions of the Federal Chancellery.

Related

D. Kondor, V. Hafez, S. Shankar, R. Wazir, F. Karimi, “Complex systems perspective in assessing risks in AI,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 13.11.2024.