Human-Algorithm Decision-Making Under Imperfect Proxy Labels

29 February 2024
2:00 pm - 3:00 pm


Room 201


Complexity Science Hub
  • Attendance: online only
  • Language: EN



Across domains such as medicine, employment, and social media, predictive models often target labels that imperfectly reflect the outcomes of interest to experts and policymakers.

For example, clinical risk assessments deployed to inform physician decision-making often predict measures of healthcare utilization (e.g., costs, hospitalization) as a proxy for patient medical needs.

These proxies can be subject to outcome measurement error when they systematically differ from the target outcome they are intended to measure.

In this talk, we discuss five sources of target variable bias that can impact the validity of proxy labels in human-algorithm decision-making tasks. We develop a causal framework to disentangle the relationships among these biases and to clarify which are of concern in specific decision-making settings.

We first leverage our framework to re-examine the designs of prior human-subject experiments investigating human-algorithm decision-making. We find that only a small fraction examine factors related to target variable bias. Next, we propose an algorithmic technique that, given knowledge of the proxy's measurement error properties, corrects for the combined effects of these challenges.
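The talk does not specify the correction technique, but the idea of adjusting predictions using known measurement error properties can be illustrated with a standard class-conditional correction: if the proxy label's sensitivity and false-positive rate with respect to the true outcome are known, a proxy-based risk estimate can be inverted to recover the true-outcome risk. The function below is a minimal illustrative sketch under that assumption, not the method presented in the talk; the parameter names and error rates are hypothetical.

```python
def corrected_risk(proxy_risk, sensitivity, false_positive_rate):
    """Recover an estimate of P(Y=1|x) from a proxy-based estimate P(T=1|x).

    Assumes class-conditional measurement error with known rates:
      sensitivity         = P(T=1 | Y=1)
      false_positive_rate = P(T=1 | Y=0)
    so that P(T=1|x) = s * P(Y=1|x) + f * (1 - P(Y=1|x)),
    which inverts to (P(T=1|x) - f) / (s - f).
    """
    est = (proxy_risk - false_positive_rate) / (sensitivity - false_positive_rate)
    # Clip to [0, 1]: estimation noise can push the inverted value outside it.
    return min(1.0, max(0.0, est))

# Hypothetical example: a model predicts a 50% chance of the proxy event
# (e.g., hospitalization). With a 90%-sensitive proxy that has a 10%
# false-positive rate, the implied true-outcome risk is about 0.5:
print(corrected_risk(0.5, 0.9, 0.1))
```

This kind of correction only addresses one source of target variable bias (class-conditional label noise with known rates); the talk's framework considers several interacting sources.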

We demonstrate the utility of our approach via experiments on real-world data from randomized controlled trials conducted in the healthcare and employment domains. Our work underscores the importance of considering intersectional threats to model validity when designing and evaluating human-algorithm decision-making workflows. We conclude by discussing the implications of imperfect proxy labels for efforts to adequately measure and mitigate network inequality.



Luke Guerdan


