UChicago CS Researchers Expand the Boundaries of Interface Technology at UIST 2025
At this year’s ACM Symposium on User Interface Software and Technology (UIST), one of the premier venues for research on how people and machines connect, the University of Chicago’s Department of Computer Science stands out with a broad slate of ambitious work. Faculty and students are contributing six full papers, two workshops on longitudinal user studies and sensory augmentation, and a vision talk that reimagines human-computer integration.
From balloon-based mid-air displays to subthreshold brain stimulation and the lingering aftereffects of virtual reality, the work heading to UIST 2025 highlights both scientific novelty and tangible potential for everyday interfaces. Each project speaks not only to fellow researchers, but to anyone interested in the subtle ways technology could reshape our bodies, our actions, and our communal spaces.
Here’s a look at the papers, demos, workshops, and vision that will represent UChicago CS at this year’s conference:
PAPER PROGRAMS
*Unless another affiliation is noted, authors are from UChicago CS
Buoyancé: Reeling Helium-Inflated Balloons with Mobile Robots on the Ground for Mid-Air Tangible Display, Interaction, and Assembly
Alan Pham, Yuxiao Li, Miyu Fukuoka (University of Electro-Communications), Ken Nakagaki
AxLab – [program link]
- This paper introduces a creative way to make mid-air space interactive: mobile robots on the ground reel helium-inflated balloons up and down to controllable heights. These “ReelBots” move balloons to display data, reconfigure lights or cameras, and even assemble structures in real time. By combining motion tracking with flexible control options, including gestures, the system opens new opportunities for dynamic visualization and spatial interaction in settings from classrooms to galleries.
Shape n’ Swarm: Hands-on, Shape-aware Generative Authoring with Swarm UI and LLMs
Matthew Jeung, Anup Sathya, Wanli Qian, Steven Arellano, Luke Jimenez, Ken Nakagaki
AxLab – [program link]
- Shape n’ Swarm presents a new way for users to create motion and interaction with tabletop swarm robots, no coding needed. Through direct hand arrangement and spoken instructions, users guide the system to interpret shapes, animate movements, and build interactive behaviors with the help of multiple script-generating LLM agents. In a user study, the approach made it much easier to experiment with physical animation and generative interfaces, previewing hands-on creative tools for classrooms, makerspaces, and collaborative design settings.
Ragged Blocks: Rendering Structured Text with Style
Sam Cohen, Ravi Chugh
Interactive Demo – [program link]
- Ragged Blocks tackles a long-standing challenge in visualizing structured text, such as code or prose, by introducing a way to nest styled decorations without disrupting the text’s natural layout. The team’s layout algorithm produces “ragged blocks,” compact polygons that allow borders and padding to highlight structure while keeping the text readable and organized. Evaluated on a range of programming languages, the approach yields layouts that are considerably more compact, and more visually effective, than traditional methods, pointing to clearer, smarter tools for developers and writers.
VR Side-Effects: Memory & Proprioceptive Discrepancies After Leaving Virtual Reality
Antonin Cheymol, Pedro Lopes
Human-Computer Integration Lab – [program link]
- While VR can quickly reshape how we perceive and interact with the virtual world, this research asks what happens when users return to reality. The paper reveals that after VR sessions, people may experience lingering side-effects: their sense of hand position can be distorted, and memories of real and virtual object locations can blur together. With evidence from two studies, the findings spotlight important safety and usability risks for everyday and professional VR use—encouraging designers and researchers to take aftereffects seriously as immersive systems become more common.
Primed Action: Preserving Agency while Accelerating Reaction Time via Subthreshold Brain Stimulation
Yudai Tanaka, Hunter Mathews, Pedro Lopes
Human-Computer Integration Lab – [program link]
- This paper explores how subtle brain stimulation can help users react faster without forcing involuntary movements. By gently “priming” motor cortex neurons below the threshold that triggers actual motion, the technique speeds up reactions while preserving users’ sense of agency and control. Study results suggest the approach has real potential for applications like VR training and interactive sports, paving the way for haptic assistance that empowers rather than disrupts.
Vestibular Stimulation Enhances Hand Redirection
Kensuke Katori, Yudai Tanaka, Yoichi Ochiai (University of Tsukuba), Pedro Lopes
Human-Computer Integration Lab – [program link]
- This research shows that stimulating our sense of balance, the vestibular system, can make hand redirection in virtual reality far less noticeable. By aligning subtle body sways with the expected motion in VR, the technique allows users’ hands to be redirected to a much greater degree without detection. In practical terms, this could help designers compress large virtual environments into smaller physical spaces, making immersive experiences more flexible and comfortable for users.
OTHER PROGRAMS
Vision Talk: What if the “I” in HCI stands for Integration?
Speaker: Pedro Lopes
[Talk]
- This vision talk invites the UIST community to rethink Human-Computer Interaction—shifting focus from “interaction” to “integration.” Highlighting trends from desktops to wearables, Pedro Lopes explores how future AI interfaces could support us through direct integration with the body, such as brain or muscle stimulation, without undermining our sense of control. The discussion challenges researchers and designers to imagine technologies that teach and augment—rather than automate—while emphasizing the need to preserve user agency and carefully consider societal impacts.
Workshop: Facilitating Longitudinal Interaction Studies of AI Systems
Presenters: Anup Sathya, Ken Nakagaki, et al.
[workshop link]
- Understanding how users interact with AI over time is crucial—one-off studies often miss the real picture of learning and adaptation. This workshop brings together researchers to tackle the practical challenges of running longer-term, real-world evaluations, offering keynotes, panels, and hands-on sessions for designing studies and building new tools. Its goal: to spark a community focused on deeper, more meaningful assessments of AI and user interface technologies.
Workshop: Toward Everyday Perceptual and Physiological Augmentation
Presenters: Yujie Tao, Jas Brooks, Pedro Lopes, et al.
[workshop link]
- As devices like smart glasses and wearable bands become part of everyday life, opportunities emerge to actively enhance our senses and bodily experiences—far beyond traditional assistive tech. This workshop explores how multisensory stimulation can enrich entertainment, well-being, and daily interaction, while also tackling the challenges of moving from prototype to real-world use. Attendees will exchange ideas and devices through keynotes, demos, and hands-on activities, aiming to lay a foundation for future, long-term augmentation systems.
From Integration To Impact
The University of Chicago researchers at UIST 2025 showcase a remarkable breadth of inquiry—spanning tangible interactive systems, longitudinal studies of AI, advanced sensory augmentation, and a forward-looking vision for human-computer integration. Their work points toward interfaces that don’t just respond to our commands, but become dynamic collaborators and extensions of our minds and bodies. As this research moves from the lab into real-world contexts such as classrooms, healthcare, entertainment, and daily routines, it opens up important questions about agency, well-being, and the social impacts of technology.
Across papers, workshops, and visionary talks, the message is clear: the future of human-computer interfaces will be shaped by thoughtful integration—putting the needs and experiences of users at the heart of every innovation.