Should We Use Language Models In High-Stakes Decision-Making Scenarios? | Max Lamparth

Tuesday, April 16, 2024
1:00 PM - 2:00 PM
(Pacific)

William J. Perry Conference Room

About the Event: The emergence of generative language models, such as the one powering ChatGPT, has sparked widespread interest because of their potential implications for the future of work and society at large. The drive to automate decision-making is reaching high-stakes domains such as military operations and mental health care, where non-zero error rates lead to individual failures with dire consequences and the potential for wide-scale harm. It is therefore time to scrutinize whether we should use language models in high-stakes decision-making scenarios. In this talk, I will dissect this question by studying how human decision-making differs from that of language models, whether language models add their own dynamics to conflict situations, and whether they can recognize emergency or other high-stakes user queries.
About the Speaker: Max Lamparth is a postdoctoral fellow at the Center for International Security and Cooperation, the Stanford Center for AI Safety, and the Stanford Existential Risks Initiative at Stanford University. He is advised by Prof. Steve Luby, Prof. Paul Edwards, and Prof. Clark Barrett.

With his research, he aims to make AI systems more secure and safe to use, in order to avoid individual and wide-scale harm. Specifically, he focuses on improving the robustness and alignment of language models, making their inner workings more interpretable, and reducing their potential for misuse.

Max received his Ph.D. in August 2023 from the Technical University of Munich and previously earned a B.Sc. and an M.Sc. from the Ruprecht Karl University of Heidelberg.

All CISAC events are scheduled using the Pacific Time Zone.