
Spotlight On: Lisette Espín-Noboa

Welcome to our interview series with a twist: researchers choose which questions they want to answer from a pool. It’s a chance to get to know them and their work from a more personal angle.

Today in the spotlight: Lisette Espín-Noboa, a postdoc in the CSH Algorithmic Fairness group. She shares why she’s rethinking what “fair” really means, how she hopes to shape hiring and AI practices, and which historical icons she’d love to brainstorm with.

Complexity is …

… the beauty of life. We’re all interconnected, living within systems that are far from simple. But it’s in that complexity that we find diversity, meaning, and what makes us who we are.

What are you currently working on, and why is it exciting to you?

I’m working on redefining fairness in social decision-making, especially in cases where bias stems not only from demographics like gender or ethnicity, but also from personal networks and social capital.

This work is exciting because the problem is complex. While many definitions of fairness already exist, most overlook the role of social connections. Yet, if you think about it, a large part of decision-making is influenced by who you know.

What makes this project even more engaging is its multidisciplinary nature. I’m collaborating with computer scientists, social scientists, philosophers, ethicists, and AI ethics researchers. These different perspectives help me approach the problem in a more complete and thoughtful way.

If your research were a movie, what would its title be?

Tell Me Who You Hang Out With, and I’ll Tell You Your Fate.

What problem would you like to have solved in 10 years?

In ten years, I hope we will have made meaningful progress toward fairness in hiring, academia, and the development of equitable and reliable language models. The first two are closely related to my current work (as mentioned earlier), where I often notice contradictions between the intention to build fair systems and the reality of how bias comes in through social connections. While building social capital is important for career growth, depending too much on it creates environments where connections matter more than merit. This pushes us toward systems where influence replaces ability.

The challenge is that not everyone starts with the same chances to build strong networks. A person from a marginalized background, no matter how determined, cannot easily match the built-in advantages of someone born into privilege. This raises difficult questions: Should we adjust our systems to recognize these imbalances, even if that seems unfair to others? Or should we focus on helping people grow their social capital, so future decisions are based on comparisons between people with more equal opportunities?

As for large language models, they still reflect and even strengthen common biases around gender, ethnicity, and popularity. Their unreliability, especially when they make up information, creates serious risks in education and public knowledge. I hope that in the next decade, we create models that are not only more fair and accurate, but also more environmentally sustainable. The energy required to train them is massive, and we need solutions that reduce that impact.

What brought you to science in the first place, and how did you end up at CSH?

I came to science through my master’s thesis. I moved to Europe with a scholarship from my home country, Ecuador, originally planning to return after completing my degree. But then I joined the Max Planck Institute for Software Systems in Saarbrücken, Germany, where I started working with Krishna Gummadi as my supervisor, and Juhi Kulshrestha, who was a PhD student at the time and also mentored me. They were both great mentors, and I learned a lot from them and from the research environment at MPI. 

My research there focused on analyzing Twitter data without relying on text, using social network analysis instead. That made me realize how much we can learn about human behavior through these dynamics, and I knew I wanted to keep working on this topic. It was a natural transition. In Ecuador, I earned an engineering degree in computer science and was mostly interested in building user-friendly software. During my master’s, I became fascinated by the power of social media data and the human patterns it reveals. In my PhD, I focused on algorithmic behavior, especially bias. As a postdoc, I continue working on algorithms, but now from a more ethical and sociological perspective, because there is still much to uncover and improve.

I ended up at CSH because there was a postdoc opening on the “Humanized Algorithms” WWTF project led by Fariba Karimi, which aligned perfectly with my interests: understanding and reducing algorithmic bias. To be honest, I hadn’t heard of CSH before, but after joining I realized it’s a well-known and highly respected place, especially among physicists and complexity scientists. Now that I’m here, I really see it as a hub. We have many resident researchers with diverse backgrounds, working on everything from supply chains and crypto decision-making to biodiversity, crime, and algorithmic fairness. The team leaders are also well established and connected internationally. It’s a great place to grow and build a career in complexity science.

Which tools or methods do you use most in your work?

Most of my research focuses on auditing algorithms to understand when and why they produce biased or unfair outcomes. I use a data-driven approach, combining both real and synthetic data. For real data, I work with social networks such as academic collaboration or citation networks, and retweet networks from platforms like Twitter. For synthetic data, I use network models that generate realistic structures, allowing me to explore what-if scenarios. These simulations help me measure bias and fairness in algorithmic outputs while controlling the properties of the network. This can also be viewed as a form of agent-based modeling. 
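
To make that concrete, here is a minimal sketch of such a what-if experiment — not Espín-Noboa’s actual code, but an illustration assuming networkx, with a made-up homophilic preferential-attachment generator and illustrative parameter names: grow a network with a tunable minority fraction and homophily, then measure the minority’s share of a top-k ranking.

```python
import random
import networkx as nx

def homophilic_ba(n=2000, m=2, minority_frac=0.2, homophily=0.8, seed=42):
    """Grow a preferential-attachment network in which same-group ties are
    accepted with probability `homophily` and cross-group ties with
    probability 1 - homophily. All names and defaults are illustrative."""
    rng = random.Random(seed)
    # group 0 = minority, group 1 = majority
    groups = {i: (0 if rng.random() < minority_frac else 1) for i in range(n)}
    G = nx.Graph()
    G.add_nodes_from(range(m))
    repeated = list(range(m))  # nodes repeated by degree -> preferential attachment
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            cand = rng.choice(repeated)
            accept_p = homophily if groups[cand] == groups[new] else 1 - homophily
            if rng.random() < accept_p:
                chosen.add(cand)
        for t in chosen:
            G.add_edge(new, t)
            repeated += [new, t]
    nx.set_node_attributes(G, groups, "group")
    return G

G = homophilic_ba()
top = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:100]
minority_share = sum(G.nodes[u]["group"] == 0 for u, _ in top) / len(top)
print(f"minority share of the top-100 by degree: {minority_share:.2f}")
```

Sweeping homophily or minority_frac while holding everything else fixed is the kind of controlled what-if scenario described above.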

The algorithms I typically work with operate on network data, including node classification, node ranking, and link prediction. More recently, I have extended my work to include large language models, auditing their behavior in expert recommendation tasks to quantify biases. 

In another project, I compared the performance of complex and classic AI models in poverty prediction. The goal was to understand which models best identify the rich, the poor, or the middle class across urban and rural areas. This involved using deep learning models trained on satellite imagery, as well as classic regression models that relied on crowdsourced data from open sources such as OpenStreetMap and Meta’s Data for Good.

What’s the most unexpected place your work has taken you—intellectually or geographically?

Intellectually, the most unexpected place has been exploring research from well before I was born, especially in sociology. I’ve found deep connections between my work and the ideas of scholars like Putnam, Coleman, Bourdieu, and Merton, particularly on social capital and its influence on society. Their theories continue to shape how I think about fairness and inequality in algorithmic systems.

Geographically, my work has taken me as far as Perth to attend TheWebConf in 2017, and to research visits at Stanford, the University of Southern California, and the Santa Fe Institute. Each place gave me new perspectives and collaborations that shaped how I approach my research today.

If you could invite any historical figure to a research meeting at CSH, who would it be—and what would you talk about?

I would invite J. Robert Oppenheimer and Alexander Fleming.

Today, there is an ongoing debate about how to deploy algorithms responsibly. Should we release them first and fix the harms later, or should we be more cautious from the start and design systems with responsibility built in?

Whenever I’m in these discussions, I think of the atomic bomb and penicillin. Without constraints, we can push the boundaries of discovery and achieve breakthroughs like penicillin. But with no oversight, we also risk outcomes like the atomic bomb. I’d want to hear their perspectives: what they learned from their work, and how they think scientific responsibility should be handled when discoveries have such wide-reaching consequences.

What’s one concept in complexity science you think everyone should know—and why?

I think this is very subjective. My answer is definitely influenced by what I work on and what I find important. For me, basic graph theory is essential. It’s at the core of almost any complexity science project, whether applied or not. Also, having a good handle on math and basic programming really helps. If you’re comfortable with both, you can explore ideas more freely and test things on your own.
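
As a small illustration of how far basic graph theory plus basic programming can take you (a toy example with networkx, not something from the interview), a few lines already let you ask meaningful questions about a real social system:

```python
import networkx as nx

# Zachary's karate club: a classic small social network
G = nx.karate_club_graph()
dc = nx.degree_centrality(G)
hub = max(dc, key=dc.get)
print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
print(f"density: {nx.density(G):.3f}, most central node: {hub}")
```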

When did you last change your mind about something important in your work?

I’m not sure I’ve changed my mind recently about something specific in my research, but I have changed my view on work and success in general.

I used to believe that being a hard worker was all it took to succeed (to some extent). That might be true when you’re young, but over time I’ve realized it’s not the only ingredient. Being efficient and strategic is often more effective. I’ve learned the value of working smarter: dividing tasks, collaborating, and learning from others. Collaboration isn’t just about getting help, it’s also about gaining new perspectives.

A small anecdote: when TikTok became popular before the pandemic, I completely dismissed it. I thought it was just for kids and didn’t take it seriously. But then a senior colleague showed genuine excitement about it and all the creative possibilities it offered. That moment made me reflect on how easy it is to become resistant to change as we get older. Since then, I’ve tried to stay open to new tools and trends, even if they seem outside my comfort zone. It’s a good reminder not to fall behind just because something feels unfamiliar.

What’s one research result that genuinely surprised you?

One research result that really surprised me came from our work on biases in ranking algorithms. We found two key things. First, being a minority doesn’t always mean being underrepresented. In sociotechnical systems where people are interconnected, minorities tend to be discriminated against only when they are also poorly connected. This suggests that quotas alone may not solve representation issues in top rankings.

Second, in our simulations, we saw that even when we artificially make the system fifty-fifty in terms of group size, if we do not change how people are integrated into the network, the minority group still struggles to gain visibility. Follow-up work by Neuhauser et al. (2022) supports this finding. Even when the minority becomes the numerical majority, unequal integration still results in poor representation. This means that interventions should not focus only on quotas to improve representation in lower ranks, but also on improving integration so that minorities have a fair chance to reach higher ranks.
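
A self-contained toy makes the intuition tangible — this uses networkx’s stochastic block model rather than the models from those papers, with numbers chosen purely for illustration: two equally sized groups, one of them far better integrated. Even at a fifty-fifty split, the poorly integrated group rarely reaches the top of a PageRank ranking.

```python
import networkx as nx

sizes = [500, 500]              # fifty-fifty group sizes
probs = [[0.020, 0.002],        # group 0: densely connected internally
         [0.002, 0.004]]        # group 1: poorly integrated
G = nx.stochastic_block_model(sizes, probs, seed=1)

pr = nx.pagerank(G)
top100 = sorted(pr, key=pr.get, reverse=True)[:100]
share_g1 = sum(1 for u in top100 if G.nodes[u]["block"] == 1) / len(top100)
print(f"group 1 share of the top-100 by PageRank: {share_g1:.2f}")  # well below 0.5
```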

What’s something in your daily routine that might surprise your colleagues?

Since March, I’ve been a visiting professor at TU Graz, and since then I’ve gotten into a new routine: one hour of Zumba around 6:30 a.m., then walking to and from the university. It’s been great not just for staying active, but also for clearing my mind. Sometimes it even helps me come up with ideas for my projects.

At night, my husband and I have this little ritual: we play a couple of the New York Times games together like Connections, Strands, and Wordle. It’s our way to relax, have fun, and disconnect from work for a bit.

What do you enjoy doing when you're not thinking about complexity?

Outside my usual Zumba hours, I enjoy freestyle dancing to Latino music from time to time (especially after a big deadline or a stressful week). It’s one of my favorite ways to relax and reconnect with myself and my roots.

Another of my favorite things is making a Negroni à la Lisette for friends who visit. It’s my own twist on the classic: gin, vermouth rosso, Campari, white wine, and Indian tonic water. It’s become a bit of a signature at our place.

What’s your favorite place in Vienna so far?

My home, and many restaurants.

What’s your go-to strategy when you’re stuck on a problem?

For technical problems, I usually try to solve them on my own first: searching online or going through documentation. If that doesn’t work, I turn to my husband, who’s also in tech, or ask my colleagues. Talking it through often helps me see the issue more clearly, especially with my husband, who tends to be very critical and is outside my research bubble. That outside perspective can really help break a mental block.

For interpersonal problems, I have a close friend with whom I regularly exchange support, and my husband is always there to listen. Even when they don’t have the answer, just being heard makes a big difference and helps me think more clearly.
