As organizations increasingly rely on algorithms to rank candidates for jobs, university spots, and financial services, hyperFA*IR offers a more principled approach to selecting candidates from a limited pool of applicants, especially when minorities are few. The new interactive visualization, ‘Ranks of Disparity,’ makes these complex dynamics visible.
THE STUDY IN A NUTSHELL
The problem: Traditional fairness algorithms treat candidate selection as a series of independent events (like coin flips). This ignores the fact that in finite pools, every selection changes the odds for the remaining candidates.
The solution: hyperFA*IR utilizes a hypergeometric distribution model. It dynamically adjusts selection probabilities as candidates are drawn from the pool, reflecting the real-world math of sampling without replacement.
The impact: The method ensures more accurate representation for underrepresented groups and helps institutions meet diversity goals through statistically grounded rankings rather than rigid quotas.
Interactive visualization: The interactive story “Ranks of Disparity” allows users to explore these dynamics firsthand using datasets from university admissions, job applications, and scholarship programs.
Imagine organizing a conference with 50 applicants competing for 20 spots, where 30% of applicants are women. You select five people, all men. What are the odds the next person selected is a woman – still 30%, or slightly higher now that five men are off the table?
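The arithmetic in this example can be checked directly. A minimal sketch in Python, using the numbers from the scenario above (variable names are illustrative):

```python
# Pool from the example: 50 applicants, 30% of whom are women.
total = 50
women = int(0.30 * total)   # 15 women
men = total - women         # 35 men

# Before any selection, the chance the next pick is a woman:
p_before = women / total
print(f"{p_before:.1%}")    # 30.0%

# After five men are selected and removed from the pool,
# 45 candidates remain and all 15 women are still among them:
remaining = total - 5
p_after = women / remaining
print(f"{p_after:.1%}")     # 33.3%
```

The odds have indeed risen from 30% to about 33%, precisely because sampling happens without replacement.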
Most existing tools to measure fairness miss this subtlety. Traditionally, they assume no randomness or treat each candidate as an independent “coin flip,” assigning the same probability to every selection. “This approach ignores the fact that in real-world selections, candidates are drawn from a limited pool, and each choice changes the odds for everyone else,” explains Mauritz Cartier van Dissel, from the Complexity Science Hub (CSH).
Together with his colleagues from the CSH Algorithmic Fairness group and TU Graz, Cartier van Dissel developed hyperFA*IR, a new method that addresses this gap. By accounting for the changing probabilities as candidates are selected, hyperFA*IR provides a more realistic, statistically grounded way to ensure fairness in finite pools – the situation most common in hiring, university admissions, and loan applications.
“The method can also support affirmative action policies, helping institutions meet representation goals while respecting the actual structure of the candidate pool, rather than relying on rigid quotas,” adds Cartier van Dissel.
DRAWING CARDS FROM A DECK
“Existing AI fairness tools assume each selection is independent,” says Cartier van Dissel. “But in reality, when you’re drawing from a fixed pool, it’s more like drawing cards from a deck—once you pick one, it affects what’s left.”
The original FA*IR algorithm uses a binomial (“coin flip”) model in which selection probabilities remain static. If a pool is 70% men and 30% women, it treats every selection as having those exact odds, regardless of who’s already been chosen.
“With current fair ranking algorithms, if we first select five individuals and all happen to be male, the probability for the next selection remains 70% men and 30% women,” notes Cartier van Dissel. “But in reality, there are now five fewer men in the pool, so the probability of selecting a woman should increase.”
Interactive Tool: Exploring the Impact of Bias
How do disparities emerge in candidate selection processes? The interactive story “Ranks of Disparity” allows users to navigate a university admissions scenario, rank candidates, and learn how statistical methods are used to measure and ensure fairness. The tool also includes an “Explorer” mode to test these concepts across three different datasets: university admissions, job applicants, and scholarship recipients.
WHEN MINORITIES ARE FEW
The novel tool refines the original FA*IR algorithm and uses a hypergeometric distribution – a statistical model that accounts for sampling without replacement. As candidates are selected and removed from the pool, hyperFA*IR dynamically adjusts the probabilities for remaining candidates.
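The hypergeometric distribution describes exactly this kind of sampling without replacement: the probability of drawing k members of a group in a selection of n candidates from a pool of N that contains K group members. A minimal pure-Python sketch applied to the conference example above (function names are mine for illustration, not the paper’s API):

```python
from math import comb

def hypergeom_pmf(k: int, N: int, K: int, n: int) -> float:
    """P(exactly k protected candidates in a draw of n,
    from a pool of N containing K protected members)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def hypergeom_cdf(k: int, N: int, K: int, n: int) -> float:
    """P(at most k protected candidates in the draw)."""
    return sum(hypergeom_pmf(i, N, K, n) for i in range(k + 1))

# Conference example: pool of 50 with 15 women, 5 spots filled so far.
# How likely is it that all 5 selections were men (0 women)?
p = hypergeom_cdf(0, N=50, K=15, n=5)
print(f"{p:.3f}")  # 0.153
```

A fairness test in this spirit compares such a probability against a significance level: if seeing so few candidates from a group is sufficiently improbable under the hypergeometric model, the ranking prefix is flagged as underrepresenting that group.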
This matters most when selections are large relative to the pool size or when underrepresented groups are small. “If you’ve selected many people from one group already, the odds of selecting more from that group naturally decrease,” notes Cartier van Dissel, first author of the study published in the Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency.
FAIRNESS AT THE BOTTOM
Unlike tools that focus only on top-ranked positions, hyperFA*IR ensures fairness throughout entire rankings, including bottom positions. This matters in scenarios where lower-ranked candidates still benefit from waiting lists, fallback options, or secondary opportunities.
“In hiring pipelines or college admissions, candidates ranked lower may still benefit from waiting lists,” note the researchers. “In public service delivery or housing allocation, fairness at the bottom helps prevent systemic disadvantage and ensures equitable treatment for underrepresented groups.”
A NEW WAY TO SUPPORT AFFIRMATIVE ACTION
Traditional affirmative action policies often rely on strict quotas – for example, requiring that 40% of selected candidates be women. These fixed rules can be difficult to apply fairly, especially in small or uneven candidate pools. The hyperFA*IR tool offers a flexible, statistical alternative: it uses the target proportion to guide early selections, then dynamically adjusts probabilities as candidates are chosen, ensuring that representation goals align naturally with the remaining pool.
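As a toy illustration of this adaptive behaviour (my own simplified sketch, not the algorithm from the paper): recompute the selection probability from whatever remains in the pool after every pick, so the odds track the pool’s changing composition instead of a fixed quota.

```python
import random

def adaptive_selection(pool_protected: int, pool_other: int,
                       n_select: int, seed: int = 0) -> list[str]:
    """Draw n_select candidates without replacement. At each step the
    probability of picking a protected candidate equals their share of
    the *remaining* pool, so the odds shift as the pool shrinks."""
    rng = random.Random(seed)
    selected = []
    for _ in range(n_select):
        remaining = pool_protected + pool_other
        if rng.random() < pool_protected / remaining:
            selected.append("protected")
            pool_protected -= 1
        else:
            selected.append("other")
            pool_other -= 1
    return selected

# Pool of 50 (15 protected, 35 other), 20 spots to fill:
picks = adaptive_selection(15, 35, 20)
print(picks.count("protected"), "of 20 selections are protected")
```

Because each draw removes a candidate, the expected share of protected selections naturally matches their share of the pool without ever imposing a hard cutoff.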
This adaptive approach reduces the risks associated with rigid quotas, such as claims of unfair treatment against other groups or reverse discrimination. With recent U.S. Supreme Court rulings limiting the use of fixed quotas in university admissions, approaches like hyperFA*IR provide a way to promote diversity and fair representation that is both practical and legitimate, while remaining transparent and easy to implement, according to the researchers.
REAL-WORLD IMPACT
“The tool addresses a fundamental limitation in how AI systems make fair selections from real-world candidate pools,” explains Fariba Karimi, professor at TU Graz and faculty member of the Complexity Science Hub (CSH).
The researchers are now working on a model that considers multiple groups simultaneously, rather than just two. “Making sure that rankings are ‘fair’ can be tricky,” stresses Cartier van Dissel. The reality is that humans are complex and have multiple attributes – such as race, gender, and age – and the challenge is to develop algorithms that ensure fairness with respect to all parts of their identity, according to Karimi and her group.
About the study & the visualization
The research article “hyperFA*IR: A hypergeometric approach to fair rankings with finite candidate pool” by Mauritz N. Cartier van Dissel, Samuel Martin-Gutierrez, Lisette Espín-Noboa, Ana María Jaramillo, and Fariba Karimi was presented at FAccT ’25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency.
The visualization “Ranks of Disparity” was published in April 2026.