Event

Multilingual and Multicultural Misrepresentation in LLM Simulations of People

22 August 2024
11:00 am - 12:00 pm

Location

Room 201

Organizer

Complexity Science Hub
Email
events@csh.ac.at
  • Attendance: on site
  • Language: EN
Social simulation presents an intriguing and potentially revolutionary use case for Large Language Models (LLMs). Some researchers suggest that the human biases encoded in LLMs through their training data can be exploited to mimic people with greater fidelity. Promising results have been reported in using LLMs to simulate survey respondents, annotators, and even more complex simulations of groups of people. However, under which circumstances are the biases in LLMs true mirrors of people, and when are they distortions or exaggerations? To shed light on this question, I will present three case studies: 1) using LLMs to simulate data annotators, 2) using LLMs to generate training data for machine learning models, and 3) using LLMs to simulate moral reasoning. Each of these studies shows the limitations of current LLM technology in faithfully and realistically simulating people, owing to an inability to represent minority and marginalized subgroups, especially in languages beyond English.

Speaker(s)

Indira Sen
