
Allen School researchers recognized at CHI 2026 for multiple projects at the intersection of AI and HCI

At the recent ACM CHI Conference on Human Factors in Computing Systems (CHI 2026), Allen School researchers brought home multiple accolades for their innovative work in human-computer interaction (HCI) and artificial intelligence. Their projects ranged from an interactive system that lets users collaborate more flexibly with AI agents, to an AI-based tool that helps screen-reader users make sense of geovisualizations, to a method for customizing LLM outputs based on user objectives, and much more.

Best Paper Award: Cocoa

As AI agents take on more complex tasks that require sophisticated planning and execution, such as writing research reviews or analyzing complex documents, users increasingly need ways to work together with AI to tackle these problems. However, existing agentic research tools support human-AI collaboration only before or after task execution, not during it.

A team of researchers including Allen School professor Amy X. Zhang introduced Cocoa, an interactive system that enables scientific researchers to co-plan and co-execute alongside AI agents in a document editor to tackle open questions and tasks within their research projects. With Cocoa, users and AI agents can jointly complete plan steps and re-execute them as desired, similar to executing code cells in a computational notebook.

“In this work, we chose to really emphasize flexibility in designing an interface for working with a long-running AI agent so as to give users more control — they can switch from planning to execution and back again, and can also take over from the agent to alter plans or assist in execution at any step along the way,” said Zhang.

Based on a formative study of the needs of researchers who use AI to support their work, the team designed a user interface that provides flexible delegation between human and AI work. For example, in the interactive sidebar, users can edit the AI agent’s outputs and add any relevant papers that the agent did not find to help guide its suggestions and feedback. The user can also use the step assignment toggle to assign low-risk but high-effort tasks to the AI agent, such as searching for papers or expanding on preliminary ideas, while reserving the more consequential tasks that require deeper thinking for themselves, including identifying seed papers for the AI agent to explore more broadly and making connections between multiple papers.
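
The paper describes this delegation at the interface level. Purely as an illustration of the underlying idea, a plan with per-step assignment might be modeled along the lines of the sketch below; every name in it is invented for illustration and is not drawn from Cocoa’s implementation.

```python
# Hypothetical sketch of per-step delegation between a user and an AI
# agent, in the spirit of Cocoa's step-assignment toggle. All names are
# invented for illustration; this is not the authors' implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlanStep:
    description: str
    assignee: str = "agent"        # "agent" or "user": the toggle
    output: Optional[str] = None   # editable result of executing the step

def run_agent(task: str) -> str:
    return f"[agent draft for: {task}]"  # stand-in for an LLM/agent call

def ask_user(task: str) -> str:
    return input(f"Your input for '{task}': ")  # stand-in for a real UI

def execute(step: PlanStep) -> None:
    if step.assignee == "agent":
        step.output = run_agent(step.description)
    else:
        step.output = ask_user(step.description)

plan = [
    PlanStep("Search for related papers"),              # low risk, high effort
    PlanStep("Identify seed papers", assignee="user"),  # consequential
    PlanStep("Expand on preliminary ideas"),
]

for step in plan:
    execute(step)
# Any step can be re-run after the user edits its output or flips its
# assignee, much like re-executing cells in a computational notebook.
```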

The team first evaluated Cocoa against a custom chat baseline in a within-subjects, task-based lab study with 16 researchers. Participants noted that the system enabled greater steerability without sacrificing ease of use compared to the strong custom chat baseline. In a week-long field deployment study, seven participants integrated Cocoa into their day-to-day research and found that the system was especially helpful for literature discovery and synthesis.

Additional authors include Allen School professor emeritus Daniel Weld; University of Washington Human Centered Design & Engineering Ph.D. student Kevin Feng; University of Toronto Ph.D. student Kevin Pu; Matt Latzke, Pao Siangliulue, Jonathan Bragg and Joseph Chee Chang at the Allen Institute for Artificial Intelligence (Ai2); and Tal August, faculty at the University of Illinois Urbana-Champaign.

Read the full paper on Cocoa here

Best Paper Award: GeoVisA11y 

Geovisualizations, or interactive map visualizations, are powerful tools for understanding patterns and trends in spatial data, but they are inaccessible to screen-reader users. Even when accessibility features such as alt text and data tables are available, these features struggle to capture the full analytical potential of complex geovisualizations.

To address this gap, a team of researchers in the Allen School’s Makeability Lab introduced GeoVisA11y, an AI-based question-answering system that makes geovisualizations more accessible using natural language interaction. The system integrates geostatistical analysis with large language models (LLMs) to go beyond keyword-matching and rule-based approaches and make previously inaccessible analytical tasks possible.

“We started by exploring how screen-reader users could access interactive maps, but what we found is that natural language interaction with geospatial data benefits everyone. Accessibility is never binary, and we should create tools that support people across a whole spectrum of spatial analysis abilities,” said Allen School Ph.D. student and lead author Chu Li, who is advised by Allen School professor and senior author Jon Froehlich.

GeoVisA11y is made up of two primary components. The first combines a screen-reader-compatible user interface, which includes an interactive map, with an AI-based chat tool that can answer analytical, geospatial, visual and contextual questions. The team paired that with a custom question-answering pipeline that transforms natural language questions into geoanalytical responses and map interactions. For example, if a user asks whether there is a pattern on the map, the system first runs a global Moran’s I test, which can detect significant patterns using spatial autocorrelation; if autocorrelation exists, it then performs a local indicators of spatial association (LISA) analysis to identify specific spatial clusters or outliers. These LISA outputs are then summarized by GPT and presented to the user along with representative examples.
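
Moran’s I and LISA are standard geostatistical techniques. As a rough illustration of that analysis step (not GeoVisA11y’s actual code), here is a minimal sketch using the open-source PySAL libraries, assuming the map’s regions live in a GeoDataFrame with a numeric column of interest.

```python
# Minimal sketch of the pattern-detection step using the open-source
# PySAL stack (libpysal + esda). Illustrative only; not GeoVisA11y's
# actual implementation.
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran, Moran_Local

def detect_spatial_clusters(regions: gpd.GeoDataFrame, value_col: str):
    """Run the global Moran's I test first; if significant, follow up with LISA."""
    w = Queen.from_dataframe(regions)  # neighbors share a border or corner
    w.transform = "r"                  # row-standardize the weights
    y = regions[value_col].to_numpy()

    global_test = Moran(y, w)          # permutation-based significance test
    if global_test.p_sim >= 0.05:
        return []                      # no significant spatial autocorrelation

    lisa = Moran_Local(y, w)
    # LISA quadrants: 1=High-High, 2=Low-High, 3=Low-Low, 4=High-Low
    labels = {1: "High-High cluster", 2: "Low-High outlier",
              3: "Low-Low cluster", 4: "High-Low outlier"}
    return [
        (idx, labels[q])
        for idx, q, p in zip(regions.index, lisa.q, lisa.p_sim)
        if p < 0.05                    # keep only significant locations
    ]
```

In the deployed system, labeled cluster outputs like these are what GPT summarizes into the plain-language answer the user hears.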

The researchers evaluated GeoVisA11y through a series of user studies in which six screen-reader users and six sighted participants were asked to complete various data analytics tasks. Both groups engaged with the geospatial data through GeoVisA11y with different, but complementary, strategies. The screen-reader users employed a combination of verbal queries and keyboard navigation, while the sighted users visually assessed the map first, then queried for specific details. Despite their varying approaches, both groups completed the tasks and identified similar patterns, showing that GeoVisA11y could effectively bridge accessibility gaps and create a shared understanding of geovisualizations.

Additional authors include Allen School professor Jeffrey Heer, Ph.D. students Rock Yuren Pang and Arnavi Chheda-Kothary, undergraduate student Henok Assalif and alum Ather Sharif (Ph.D., ‘24).

Read the full paper on GeoVisA11y here

More CHI recognition

In addition to the two Best Paper Awards, Allen School authors received four honorable mentions at CHI 2026 for their research. 

A team of researchers in the Mobile Intelligence Lab led by Allen School professor Shyam Gollakota was recognized for VueBuds, the first system that incorporates small cameras into off-the-shelf wireless earbuds to allow users to chat with an AI model about the scene in front of them. For privacy, the prototype takes only still images that are stored and processed on the device and that the user can delete immediately. On the topic of prototypes, user studies in HCI often evaluate whether a prototype is “better”; however, the perceived newness of a technology can influence users’ judgments and possibly their performance. Allen School Ph.D. student Yumeng Ma and her collaborators were recognized for their paper quantifying this novelty bias.

Heer, along with colleagues at Stanford University, received an honorable mention for introducing just-in-time objectives, a new method for automatically inducing AI objectives on the fly by observing the user and their task. This approach allows the LLM to produce customized rather than generic outputs, from individual responses to generated software tools. Meanwhile, as conversational LLM interfaces become more commonplace in data analysis, the challenge becomes how data workers can easily go back, make sense of long analytical conversations and then communicate their insights to others. Allen School Ph.D. student Ken Gu and collaborators at Tableau Research were recognized for their paper investigating how data workers revisit these conversations and what kinds of tools can support that process.

Also at the conference, Allen School professor and alum Jon Froehlich (Ph.D., ‘11) collected a SIGCHI Societal Impact Award for his work tackling accessibility challenges through HCI and AI, while fellow alum Jeffrey Bigham (Ph.D., ‘09) was inducted into the SIGCHI Academy.

Read more about the UW presence at CHI 2026.