Date & Time:
February 20, 2025 2:00 pm – 3:00 pm
Location:
Crerar 390, 5730 S. Ellis Ave., Chicago, IL
Peter Hase (Anthropic) – AI Safety Through Interpretable and Controllable Language Models

Abstract: The AI research community has become increasingly concerned about risks arising from capable AI systems, ranging from misuse of generative models to misalignment of agents. My research aims to address problems in AI safety by tackling key issues in the interpretability and controllability of large language models (LLMs). In this talk, I present research showing that we are well beyond the point of thinking of AI systems as “black boxes.” AI models, and LLMs especially, are more interpretable than ever. Advances in interpretability have enabled us to control model reasoning and update knowledge in LLMs, among other promising applications. My work has also highlighted challenges that must be solved for interpretability to continue progressing. Building from this point, I argue that we can explain LLM behavior in terms of “beliefs,” meaning that core knowledge about the world determines the downstream behavior of models. Furthermore, model editing techniques provide a toolkit for intervening on beliefs in LLMs in order to test theories about their behavior. By better understanding beliefs in LLMs and developing robust methods for controlling their behavior, we will create a scientific foundation for building powerful and safe AI systems.

Speakers

Peter Hase

Resident AI Researcher, Anthropic

Peter Hase is an AI Resident at Anthropic. He recently completed his PhD at the University of North Carolina at Chapel Hill, advised by Mohit Bansal. His research focuses on NLP and AI Safety, with the goal of explaining and controlling the behavior of machine learning models. He is a recipient of a Google PhD Fellowship and before that a Royster PhD Fellowship. While at UNC, he also worked at Meta, Google, and the Allen Institute for AI.
