Bias Assessment in Large Language Models

Event start date: 27 February 2025, 12:00
Event end date: 27 February 2025, 12:00
Event type: Seminar
Location: Online

Topic: Bias Assessment in Large Language Models: Evaluating Generation and Decision-Making Bias

Speaker: Zekun Wu, Holistic AI & UCL

This talk will explore bias assessment in large language models (LLMs), focusing on evaluating bias in both text generation and decision-making tasks. We will present our latest research, including methodologies for benchmarking and mitigating bias in LLM-generated outputs and decision-based applications. Specifically, we will discuss findings from SAGED, a holistic bias-benchmarking pipeline; HEARTS, a framework for stereotype detection; and an evaluation of bias in metric models for open-ended text generation. We will also introduce JobFair, a benchmarking framework for assessing gender bias in hiring scenarios. Our ongoing collaboration with UCL will be highlighted as a case study in bias assessment research.

We look forward to seeing you there!