As a computer systems researcher, Allen School professor and alum Ratul Mahajan (Ph.D., ‘05) has helped develop technologies powering the networks that support multiple aspects of modern society — from operating online banking accounts to scrolling social media.
The Association for Computing Machinery (ACM) recognized Mahajan among its 2025 class of ACM Fellows for his groundbreaking “contributions to network verification and network control systems and their transfer to industrial practice.” The ACM Fellows are selected by their peers and represent the top 1% of members who have provided notable technical innovations and/or service to the field of computing.
“I love developing systems with new operational paradigms that can bring about a step change in efficiency or performance and developing techniques that provide strong guarantees about robustness of large-scale systems,” said Mahajan, a member of the Allen School’s Computer Systems Lab and co-director of the UW Center for the Future of Cloud Infrastructure (FOCI). “And I doubly love it when my work has real-world impact and changes practice.”
Mahajan has helped make network verification techniques mainstream across both industry and academia. He introduced Batfish, an open source network configuration analysis tool that proactively ensures that planned network configuration changes operate as intended. His work on Batfish, one of the earliest and most widely adopted network verification platforms, was recognized with the SIGCOMM Networking Systems Award. In collaboration with colleagues across academia and industry, Mahajan has developed several other control plane analysis technologies. Minesweeper verifies network behavior under all combinations of external routing messages and failures; Bonsai speeds up analysis by leveraging symmetries within large networks; and ARC achieves similar speedups via specialized graph-based encodings.
The technology behind Batfish and Mahajan’s other methods was commercialized by Intentionet, which he co-founded and led as CEO until the company was acquired by Amazon. Today, more than 75 companies worldwide rely on Intentionet’s technology to help design and test their networks.
In his other line of work, Mahajan focuses on developing systems that allow for the direct control of large-scale infrastructure. Historically, such control has been indirect, requiring engineers to tweak local, low-level router parameters to produce a desired global system behavior. However, as more and more cloud operators run their own global infrastructure, indirect control can lead to poor efficiency and reliability. Before joining the Allen School faculty, Mahajan was at Microsoft, where he and his collaborators developed SWAN. That system increases the utilization of inter-datacenter networks by directly controlling the amount of traffic each service sends and by frequently reconfiguring the data plane to match traffic demands. Prior to SWAN, under indirect control, the busiest links had an average utilization of between 40 and 60%; the switch to SWAN achieved an almost 100% utilization rate.
Mahajan has also extended direct control to the entire infrastructure for delivering online services. He and his team introduced the Footprint system, which leverages dynamics of an integrated setting to boost efficiency and performance. In simulations partially deploying Footprint in the Microsoft infrastructure, they found that it could carry at least 50% more traffic and reduce user delays by at least 30% compared to current methods. Mahajan also helped develop Statesman, a network-management service that enables multiple direct-control applications to safely operate over shared infrastructure.
“The centralized management and control of network infrastructure that Ratul developed transformed Microsoft’s cloud infrastructure,” said Victor Bahl, technical fellow and chief technical officer for Azure operations at the company. “SWAN now carries over 90% of the traffic in and out of Microsoft’s datacenters, a footprint spanning over 280,000 kilometers of fiber and over 150 points of presence across all Azure regions; Footprint has evolved into Microsoft’s content distribution network; and Statesman is deployed across most Microsoft datacenters.”
“At the Allen School, we have developed a sequence of seminars that introduce undergraduate students to research and give them the opportunity to work on a hands-on research project with a faculty or graduate student mentor,” said Allen School professor Leilani Battle, who co-chairs the undergraduate research committee alongside colleague Maya Cakmak.
“Through these research opportunities, students see a side of computer science that they may not encounter in their classes or internships, such as learning how to identify new research problems or how to draw connections between different areas of computer science,” Battle continued.
From developing policies to help robots learn to introducing methods to better understand the training data behind large language models (LLMs), these nationally recognized students have demonstrated how to take what they learned in the classroom and make a real-world difference.
Haoquan Fang: Enhancing the reasoning capabilities of robots
Haoquan Fang
Award winner Haoquan Fang aims to tackle one of the significant challenges in today’s embodied AI models — how to equip robots with robust, generalizable and interpretable reasoning abilities. His work paves the way for robots that can understand the world and act with purpose.
“I am broadly interested in robot learning. In particular, I focus on developing generalist robotic manipulation policies that leverage strong priors, by optimizing both the data curation and model architectures,” said Fang.
Fang proposed a new model that integrates memory into robotic manipulation. Alongside Ranjay Krishna, Fang spearheaded the development of SAM2Act, a multi-view robotic transformer-based policy that leverages the visual foundation model Segment Anything Model 2 (SAM2) to achieve state-of-the-art performance on existing benchmarks for robotic manipulation. He then built off that architecture to introduce SAM2Act+, which extends the SAM2Act system with memory-based components. This policy enables the agent to better predict actions based on past spatial information, thus enhancing performance in sequential decision-making tasks. Fang and his collaborators published this work at the 42nd International Conference on Machine Learning (ICML 2025), and received the Best Paper Award at the RemembeRL Workshop at the 9th Annual Conference on Robot Learning (CoRL 2025). Last year, his senior thesis on SAM2Act was awarded Best Senior Thesis Honorable Mention from the Allen School.
Fang also co-led the introduction of the first fully open action reasoning model for robotics with MolmoAct. The model, which was designed by a team of University of Washington and Allen Institute for Artificial Intelligence (Ai2) researchers, enables robots to interpret and understand instructions, sense their environment, generate spatial plans, and then execute them as goal-directed trajectories. Across various benchmarks, MolmoAct outperformed multiple competitive baselines, including NVIDIA’s GR00T N1.5. MolmoAct received the People’s Choice Award at the Allen School’s 2025 Research Showcase, as well as the Best Paper Award runner-up at the Rational Robots Workshop at CoRL 2025. The State of AI Report 2025 also highlighted MolmoAct for setting the standard of embodied reasoning, which was later adopted by Google Gemini Robotics 1.5.
Hao Xu: Making internet-scale corpora searchable
Hao Xu
Large language model behavior is shaped by its training data and tokenization. For Hao Xu, understanding the composition of these models’ training data is increasingly important as the “data scales beyond what is practical to inspect.” Today’s LLMs are trained on massive, Internet-scale text datasets; however, it is difficult to analyze and understand the quality and content of these corpora.
“My research interests lie in natural language processing with a focus on large language models. My future work aims to develop more efficient model-data interactions that move beyond today’s brute-force training paradigm,” said Xu. “As a violinist, I also view music as a distinct form of language and am interested in studying how it can be modeled and learned using language modeling techniques.”
Xu’s primary research focuses on bridging this gap. Alongside Allen School professors Hannaneh Hajishirzi and Noah A. Smith and Ph.D. student Jiacheng Liu, Xu developed infini-gram mini, an efficient exact-match search engine that is designed to work on internet-scale corpora with minimal storage needs. The system makes several open source corpora, such as Common Crawl, searchable, and it currently hosts the largest body of searchable text in the open-source community.
Using infini-gram mini, Xu and her collaborators revealed widespread contamination across standard LLM evaluation benchmarks, in which the models’ training data inadvertently contains the test data. Their results raise concerns about how researchers measure artificial intelligence progress and have sparked new conversations about evaluation integrity and LLM dataset transparency. As lead author, Xu presented the research at last year’s Conference on Empirical Methods in Natural Language Processing (EMNLP 2025), where she and the team received the Best Paper Award.
Xu is also interested in understanding the fundamentals of LLMs, such as tokenization that “shapes how the models interact with text.” She undertook the first systematic examination of how tokenization mechanisms prevalent in English fail in other languages with different morphology or writing systems. The paper presenting these findings, for which she was first author, is currently under review.
Kaiyuan Liu: Strengthening the reasoning capabilities of LLMs
Kaiyuan Liu
Kaiyuan Liu aims to build reasoning-capable AI models. His background in competitive programming informs his research as he develops tests and benchmarks for LLMs’ proficiency in reasoning and self-correction.
“My research goal is to understand and improve the reasoning capabilities of large language models,” said Liu. “This goal emerges from two converging paths: years of competitive programming, which trained me to value algorithmic precision and creativity, and a broader curiosity about how intelligent systems — biological and artificial — learn, reason and cooperate.”
Writing competitive programming problems is a time-consuming task, requiring programmers to set multiple variables and constraints, including input distributions, edge cases and specific algorithm targets. This makes it an ideal test of general LLM capabilities. Liu and his collaborators, including Allen School professor Natasha Jaques, developed AutoCode, a closed-loop multi-role framework that automates the entire process of competitive programming problem creation and evaluation. AutoCode can detect with 91% accuracy whether a program is a valid solution to a given algorithmic problem. The framework has potential industry relevance, especially as more and more large companies attempt to leverage LLMs to write and submit code independently. The team will present their findings at the 14th International Conference on Learning Representations (ICLR 2026) in April.
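The validation loop at the heart of such a framework can be illustrated with a deliberately simplified sketch. The generator, reference and candidate below are hypothetical stand-ins for illustration only, not AutoCode’s actual components:

```python
# A hypothetical miniature of the validation step in a framework like
# AutoCode: judge a candidate program by running it against fixed edge
# cases plus randomly generated inputs and comparing with a trusted
# reference solution. All three functions here are illustrative stand-ins.

import random

def reference(xs):
    """Trusted solution: the maximum of a nonempty list."""
    return max(xs)

def candidate(xs):
    """Buggy candidate: returns the first element instead of the maximum."""
    return xs[0]

def validate(candidate_fn, reference_fn, trials=100, seed=0):
    """Return True iff the candidate agrees with the reference on all tests."""
    rng = random.Random(seed)
    cases = [[1, 2], [3]]  # deterministic edge cases first
    cases += [[rng.randint(-10, 10) for _ in range(rng.randint(1, 8))]
              for _ in range(trials)]
    return all(candidate_fn(xs) == reference_fn(xs) for xs in cases)

print(validate(candidate, reference))   # buggy candidate is rejected
print(validate(reference, reference))   # reference agrees with itself
```

A real system additionally has to generate the problems, inputs and reference solutions themselves, which is where most of AutoCode’s machinery lies.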
Liu also helped develop a set of benchmarks to better evaluate LLMs’ reasoning capabilities in competitive programming. LiveCodeBench Pro is composed of high-quality, live-sourced programming problems from sources such as Codeforces and the International Collegiate Programming Contest (ICPC) that vary in difficulty and are continuously updated to reduce the chance of data contamination. The researchers paired large-scale LLM studies with expert annotations and found that frontier models are proficient in solving implementation-oriented problems, but struggle with complex algorithmic reasoning, nuanced problem-solving and handling edge cases — failing on some of the benchmark’s most difficult problems. LiveCodeBench Pro has already had industry impact as the benchmark was selected for the Gemini 3 launch evaluation. Liu and his collaborators presented LiveCodeBench Pro at the 39th Annual Conference on Neural Information Processing Systems (NeurIPS 2025) last December.
Outside of his research, Liu and his team were ICPC 2024 World Finalists for competitive programming. More recently, he coached the UW programming team Jiakrico that won first place at the ICPC PNW Regional competition last November.
Lindsey Wei: Building efficient LLM-powered data systems
Lindsey Wei
Modern data systems power nearly every aspect of our digital world, yet growing data complexity and heterogeneity make it increasingly difficult for systems to consistently interpret and process data at scale. Lindsey Wei focuses on developing “intelligent and reliable data systems that reason about data semantics to make data-driven decision-making more accessible.”
“Large language models open up new opportunities for how systems understand and interact with data,” said Wei. “But integrating these capabilities into data systems in a systematic way remains challenging.”
One setting where these challenges arise is table understanding, which focuses on recovering missing semantic metadata from web tables and is crucial for data integration. Existing LLM-based methods have limitations including hallucinations and lack of domain-specific knowledge. To address this, Wei, alongside Allen School professor and director Magdalena Balazinska, developed RACOON, a framework that augments LLMs with facts retrieved from a knowledge graph through retrieval-augmented generation (RAG) to significantly improve zero-shot performance. Next, Wei aims to extend the system via RACOON+, further improving its accuracy and robustness by strengthening how models link to and reason over external knowledge.
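The retrieval-augmented pattern behind this approach can be sketched in miniature. The toy knowledge graph, function names and prompt format below are illustrative assumptions, not RACOON’s actual implementation:

```python
# A minimal, hypothetical sketch of the retrieval-augmented pattern RACOON
# builds on: before asking an LLM to label a table column, look up each
# cell value in a knowledge graph and prepend any retrieved facts to the
# prompt. The toy knowledge graph and prompt format are illustrative only.

TOY_KG = {
    "Seattle": "Seattle is a city in Washington, United States.",
    "Tokyo": "Tokyo is the capital city of Japan.",
}

def retrieve_facts(cells, kg):
    """Return known facts for any cell values found in the knowledge graph."""
    return [kg[c] for c in cells if c in kg]

def build_annotation_prompt(cells, kg):
    """Assemble a zero-shot column-annotation prompt grounded in KG facts."""
    facts = retrieve_facts(cells, kg)
    context = "\n".join(facts) if facts else "(no facts retrieved)"
    return (
        "Facts:\n" + context + "\n\n"
        "Column values: " + ", ".join(cells) + "\n"
        "What is the semantic type of this column?"
    )

print(build_annotation_prompt(["Seattle", "Tokyo", "Oslo"], TOY_KG))
```

Grounding the prompt in retrieved facts is what reduces hallucination relative to asking the model to label the column from its parametric knowledge alone.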
Inspired by how inference-time techniques such as RAG can unlock LLMs’ reasoning over structured data, Wei began exploring how to extend these reasoning capabilities to unstructured data processing — a longstanding challenge in data management. With a team of University of California, Berkeley and Google researchers, Wei developed MOAR (Multi-Objective Agentic Rewrites), a new optimizer for DocETL, an open-source system for LLM-powered unstructured data processing at scale. MOAR introduces a global search algorithm that explores a vast space of possible pipeline rewrites to identify those with the best accuracy–cost tradeoffs under a limited evaluation budget. In experiments across six real-world workloads, MOAR consistently discovered pipelines that were both more accurate and significantly cheaper than prior approaches. The team recently released a preprint of this work, highlighting the need to rethink how optimization is designed for LLM-powered data systems.
In addition to designing LLM-powered data systems, Wei has also helped develop a graphical user interface for MaskSearch, a system that accelerates queries over databases of machine learning-generated image masks, leading to improved model debugging and analysis workflows.
From left to right: Laura Sanità from Bocconi University, Yang Liu from Carnegie Mellon University, Allen School professor Thomas Rothvoss, along with Alon Rosen and Riccardo Zecchina from Bocconi University.
Allen School professor Thomas Rothvoss has carved a career out of complexity. As a member of the school’s Theory of Computation Group, Rothvoss examines the theoretical limits of computer algorithms that are designed to analyze large, complex datasets. His aim is to settle long-standing problems in the field of combinatorial optimization — an area where he has made notable progress since his arrival at the University of Washington in 2014.
“I work in theoretical computer science and discrete optimization,” said Rothvoss, who also holds the Craig McKibben and Sarah Merner Professorship in the UW Department of Mathematics. “Over the years my focus has changed a bit. During my Ph.D., I worked on approximation algorithms which deal with finding provably good solutions to NP-hard problems in polynomial time. Later, I moved more towards discrepancy theory and theoretical aspects of linear and integer programming.”
Last month, Rothvoss collected the inaugural Trevisan Prize in the mid-career category for his breakthrough contributions in the study of optimization problems. The award, which is sponsored by the Bocconi University Department of Computing Sciences and the Italian Academy of Sciences, is named for the late Luca Trevisan in recognition of his major innovations to computing theory.
Rothvoss’ own list of innovations includes a new approximation algorithm for solving the bin-packing problem, one of computer science’s classical combinatorial optimization problems, in polynomial time. In the bin-packing problem, the goal is to find the smallest number of identical bins that can hold a list of items without exceeding each bin’s capacity. By combining a novel two-stage packing method — in which items are first packed into containers and the containers are then placed into bins — with techniques from discrepancy theory, Rothvoss and Rebecca Hoberg, his former UW Department of Mathematics Ph.D. student and current postdoc, introduced an algorithm that achieves an additive gap of only O(log OPT) bins, significantly improving on previous results. The paper received the Best Paper Award at the 25th annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2014).
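For readers unfamiliar with the problem, the classic first-fit decreasing heuristic makes it concrete. This simple baseline is shown only to illustrate what “packing into the fewest bins” means; it comes nowhere near the additive guarantees of the Hoberg–Rothvoss algorithm:

```python
# The classic first-fit decreasing heuristic for bin packing, shown only
# to make the problem concrete; it is a simple baseline, far from the
# O(log OPT) additive gap that Hoberg and Rothvoss's algorithm achieves.

def first_fit_decreasing(items, capacity):
    """Greedily pack items (largest first) into capacity-bounded bins."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:  # item fits in an existing bin
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits: open a new one
    return bins

packed = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5], capacity=10)
print(len(packed))  # number of bins used
```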
He was also recognized for his work addressing another open question central to the field of combinatorial optimization. Multiple authors had already established that various polytopes for NP-hard problems have exponential extension complexity. However, Rothvoss wanted to see if the same was true for polytopes that admit polynomial-time algorithms for optimizing linear functions. In a paper titled “The matching polytope has exponential extension complexity,” Rothvoss proved that linear programming cannot be used to solve the matching problem in polynomial time, advancing theoreticians’ understanding of the topic. He earned both the 2018 Delbert Ray Fulkerson Prize and the 2023 Gödel Prize for the work — the former honoring exceptional papers in discrete mathematics, the latter outstanding papers in theoretical computer science.
More recently, Rothvoss and his former Ph.D. student Victor Reis (Ph.D. ‘23) used geometric tools to develop a faster algorithm for solving integer programming problems with a fixed number of variables. Integer programming optimizes over variables constrained to whole-number values, but many of the algorithms previously introduced for these problems were slow in terms of the number of steps required. The researchers resolved a version of the Subspace Flatness Conjecture and proved a new upper bound on the time required to solve any integer program. While the breakthrough is theoretical, the research has opened up new questions for theoreticians to pursue, earning Rothvoss a Best Paper Award at the 64th IEEE Symposium on Foundations of Computer Science (FOCS 2023).
In the 1995 movie “Clueless,” lead character Cher Horowitz has a digital closet that allows her to virtually try on outfits. Proving some ideas never go out of style, Allen School professor Ira Kemelmacher-Shlizerman has spent the past two decades working to make that and other futuristic technologies a reality.
“My general area is at the intersection of computer vision and graphics, or generative media. I am excited about all aspects of image and video generation and the applications we can build on top of it. A big passion of mine is to model people and clothing from large photo collections,” said Kemelmacher-Shlizerman, director of the UW Reality Lab and member of the UW Graphics & Imaging Group (GRAIL). “I am currently working to push the boundaries of human and clothing modeling to new levels, going to extreme quality and details.”
The Institute of Electrical and Electronics Engineers (IEEE) recently recognized her in the 2026 class of IEEE Fellows, one of the organization’s highest honors, for her “contributions to face, body, and clothing modeling from large image collections.” The IEEE Fellows represent members with an exceptional record of accomplishments in their field who bring “the realization of significant value to society at large.”
Kemelmacher-Shlizerman pioneered the field of face modeling from large photo collections in the wild. She helped develop the first photometric stereo method that can reconstruct three-dimensional face models from unstructured photos pulled from the Internet, launching thousands of follow-up papers. As the field progressed, Kemelmacher-Shlizerman and her collaborators introduced the MegaFace Benchmark, the first million-image scale image dataset for evaluating facial recognition algorithms. The work served as the field’s standard benchmark for many years. Kemelmacher-Shlizerman then developed a personalized image search engine that allows users to imagine how they could look with various hairstyles, or in different time periods or ages — anything that can be queried in the search engine. She later commercialized the technology through her startup Dreambit, which was acquired by Meta.
Since then, Kemelmacher-Shlizerman has transitioned from helping users test out new hairstyles to trailblazing virtual try-on technology, allowing for real clothing to be rendered on a human body. She and her team introduced the first use of conditional generative adversarial networks for photorealistic try-on. In addition to virtual try-on technology for photos, Kemelmacher-Shlizerman developed Fashion-VDM, a video diffusion model for generating virtual try-on videos to help users see the garment from multiple angles and understand how it flows and drapes in motion.
“We are working on detailed measurement of humans and clothing to enable fit aware virtual try-on, as in going beyond generative visualization to providing metrically correct measurement based try-on to create better than physical try-on technology,” said Kemelmacher-Shlizerman.
Her work as a principal scientist at Google, where she leads Google Shopping’s Generative Media team, is bringing this technology to the mainstream. Kemelmacher-Shlizerman and her team introduced a generative AI tool for Google Shopping that lets users see how an article of clothing looks on a range of models of different sizes, body shapes and skin tones. Last December, her team launched an upgraded version of the virtual try-on tool that generates a full-body digital version of a user to help them see how clothes would look on their own body.
“Ira is arguably the foremost researcher in the field of virtual try-on technology. She has written several seminal papers, pioneering the use of generative AI for this use case, and is the chief technologist behind Google’s launch of this technology in their search and shopping products,” said Allen School professor Steve Seitz, who co-directs the UW Reality Lab. “While this research area is relatively new, it’s already having a major impact on industry.”
In addition to being named an IEEE Fellow, Kemelmacher-Shlizerman has been recognized as an Association for Computing Machinery (ACM) Distinguished Member and has received a Google Faculty Award, the GeekWire Innovation of the Year Award, and the Madrona Prize.
From left to right: EMNLP program chairs Violet Peng and Christos Christodoulopoulos, lead author Hao Xu, EMNLP general chair Dirk Hovy and program chair Carolyn Rose.
Large language models (LLMs) such as ChatGPT are trained using massive text datasets downsampled from the Internet. As these language models become more popular and widespread, it becomes increasingly important to understand the composition of the data source and how it affects the model’s behavior. The first step is to make these texts searchable.
Current exact-match search engines are limited by their high storage requirements, hindering their application on extremely large-scale data. With previous methods, storing the Internet-size text index would cost around $500,000 per month. To make searching on such a large scale more efficient and affordable, a team of University of Washington and Allen Institute for Artificial Intelligence (Ai2) researchers developed infini-gram mini, a scalable system that uses the compressed FM-index data structure to index petabyte-level text corpora.
“We developed infini-gram mini, an efficient search engine designed to handle exact-match search on arbitrarily long queries across Internet-scale corpora with minimal storage overhead,” said Allen School undergraduate student and lead author Hao Xu. “Infini-gram mini hosts the largest body of searchable text in the open-source community.”
While the FM-index has been widely used in bioinformatics, the team was the first to apply it to natural language data at the Internet scale. The infini-gram mini system improves on the best FM-index implementation, achieving an 18 times increase in indexing speed and a 3.2 times reduction in memory usage. The resulting index needs only 44% as much storage as the raw text, which is only 7% of what the original infini-gram required.
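The exact-match search that the FM-index supports rests on the same idea as a suffix array: binary search over the sorted suffixes of a text. The sketch below illustrates that idea in miniature, without the compression that makes infini-gram mini storage-efficient; it is not the system’s actual implementation:

```python
# A toy illustration of exact-match counting via a suffix array: sort all
# suffix start positions, then binary-search the query. The FM-index that
# infini-gram mini uses achieves the same queries over *compressed* text,
# which this naive O(n^2 log n) sketch entirely omits.

import bisect

def build_suffix_array(text):
    """Sort all suffix start positions of `text` lexicographically."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def count_occurrences(text, sa, query):
    """Count exact occurrences of `query` by binary search over suffixes."""
    suffixes = [text[i:] for i in sa]  # materialized for clarity only
    lo = bisect.bisect_left(suffixes, query)
    hi = bisect.bisect_left(suffixes, query + "\uffff")
    return hi - lo

corpus = "the cat sat on the mat"
sa = build_suffix_array(corpus)
print(count_occurrences(corpus, sa, "the"))
print(count_occurrences(corpus, sa, "dog"))
```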
“In infini-gram mini, we combined advanced algorithms and data structures and scaled-up engineering to tackle real, pressing challenges in AI. It is a very unique combination,” said co-author and Allen School Ph.D. student Jiacheng Liu. “The most interesting part was that we revitalized a data structure repo that hasn’t been maintained for almost 10 years, armed it with modern parallel computing, and scaled it up to the sky to handle Internet-scale data with low compute needs. We almost built a Google Search without Google-level budget.”
To showcase infini-gram mini’s search capabilities, the researchers used the system to conduct a large-scale benchmark contamination analysis to see if the training data of LLMs inadvertently contains their test data. They found that many widely used evaluation benchmarks appeared heavily in these corpora, which could lead to an overestimation of the language model’s true capabilities, as it enables models to retrieve memorized answers from training data rather than performing task-specific reasoning. Alongside infini-gram mini, the team also released a benchmark contamination monitoring system, with the goal of encouraging more transparent and reliable evaluation practices in the community, explained Xu.
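At its core, the contamination analysis asks whether benchmark text appears verbatim in the training corpus. The toy sketch below illustrates that check, with a plain substring scan standing in for the petabyte-scale index lookups the team actually performs:

```python
# A toy sketch of benchmark contamination detection: flag benchmark items
# whose text appears verbatim in the training corpus. The real analysis
# runs such exact-match lookups against petabyte-scale indexes; a plain
# Python substring scan stands in for that here.

def contaminated_items(benchmark, corpus):
    """Return the benchmark entries found verbatim in the training corpus."""
    return [item for item in benchmark if item in corpus]

corpus = "q: what is 2+2? a: 4. the quick brown fox jumps over the lazy dog."
benchmark = ["what is 2+2?", "what is the capital of france?"]
print(contaminated_items(benchmark, corpus))
```

An item flagged this way may let a model retrieve a memorized answer instead of reasoning, which is why contamination inflates benchmark scores.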
Additional authors include Allen School professors and Ai2 researchers Hannaneh Hajishirzi and Noah A. Smith, along with Allen School affiliate faculty member Yejin Choi, faculty member at Stanford University.
Allen School Ph.D. student Liwei Jiang (center) and affiliate faculty member and Stanford University professor Yejin Choi accept the Best Paper Award at the NeurIPS 2025 conference.
If you ask 10 different people to write a metaphor about time, you might get 10 unique responses. However, if you give the same prompt to 10 large language models (LLMs), you may end up receiving similar outputs across the board — almost like they are all part of a hivemind.
These LLMs often struggle to generate distinct and human-like creative content, yet scalable methods for analyzing the diversity of LLM responses are limited beyond tasks such as random name or number generation. Recently, a team of University of Washington researchers developed Infinity-chat, a benchmark dataset featuring 26,000 real-world, open-ended queries paired with more than 31,000 human preference annotations. That work, which was led by Allen School Ph.D. student Liwei Jiang, allows for a systematic evaluation of the creative generation of artificial intelligence models.
“This research reveals a critical limitation in large language models: despite their diversity of architectures and training approaches, LLMs produce strikingly homogeneous outputs on open-ended queries, a phenomenon we termed the ‘Artificial Hivemind,’” said co-author Yulia Tsvetkov, who holds the Paul G. Allen Career Development Professorship in the Allen School.
With Infinity-chat, Tsvetkov and her collaborators introduced the first comprehensive taxonomy of open-ended LLM queries. The researchers broke down the different queries that users pose to language models into six high-level categories and 17 fine-grained subcategories such as problem solving or speculative and hypothetical scenarios. Of the high-level categories, creative content generation (58%) and brainstorming and ideation (15.2%) were among some of the most common — emphasizing users’ reliance on LLMs for direct inspiration and thought.
The team then used Infinity-chat to conduct a large-scale study of mode collapse in LLMs. After evaluating more than 70 LLMs using real-world, open-ended questions, they found an “Artificial Hivemind” effect. This phenomenon is characterized by both intra-model repetition, where the same model fails to produce diverse responses, and inter-model homogeneity, which is where different models generate similar outputs. These insights can help guide future research into mitigating the long-term AI safety risks associated with the Artificial Hivemind.
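One simple way to quantify the two effects, offered here as an illustrative sketch rather than the paper’s actual metric, is mean pairwise similarity over responses, with word overlap standing in for a learned similarity measure:

```python
# A hedged sketch (not the paper's metric) of how one might quantify the
# two failure modes above: intra-model repetition scores one model's own
# samples against each other, while inter-model homogeneity scores outputs
# drawn from different models. Jaccard word overlap is the stand-in here.

def jaccard(a, b):
    """Word-set overlap between two responses, in [0, 1]."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def mean_pairwise_similarity(responses):
    """Average Jaccard similarity over all pairs (needs >= 2 responses)."""
    pairs = [(i, j) for i in range(len(responses))
             for j in range(i + 1, len(responses))]
    return sum(jaccard(responses[i], responses[j]) for i, j in pairs) / len(pairs)

# Identical responses score 1.0; responses sharing no words score 0.0.
print(mean_pairwise_similarity(["time is a river", "time is a river"]))
print(mean_pairwise_similarity(["time is a river", "clocks devour moments"]))
```

Feeding one model’s samples to this function probes intra-model repetition; feeding one response from each of several models probes inter-model homogeneity.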
“Testing over 70 models from major AI developers, our study found systematic convergence on similar responses to open-ended queries, raising concerns about groupthink in AI systems that could lead to shared blind spots and correlated errors,” said Tsvetkov. “These findings have direct implications across critical application areas including AI for science, medicine, education, decision support and many others, where robust reasoning across diverse perspectives is essential.”
Additional authors include Allen School Ph.D. students Margaret Li and Mickel Liu; alumni Raymond Fok (Ph.D., ‘25), now at Microsoft, and Maarten Sap (Ph.D., ‘21), now faculty at Carnegie Mellon University; Yuanjun Chai, a student in the UW Department of Electrical & Computer Engineering Daytime Master’s Program (MSEE); Nouha Dziri at the Allen Institute for Artificial Intelligence (Ai2); and Allen School affiliate faculty member Yejin Choi, professor at Stanford University.
From left to right: co-author Jacob Wobbrock, lead author Shaun Kane, chair of the SIGACCESS Awards Committee Jeff Bigham, senior author Richard Ladner and co-author Chandrika Jayant.
Mobile devices — including cell phones, music players, GPS tools and game consoles — have helped people with disabilities live independently. However, these technologies still come with their own accessibility challenges. In a user study, a team of Allen School and University of Washington Information School researchers examined how participants with visual and motor disabilities select, adapt and use mobile devices in their everyday lives.
At the 27th International ACM SIGACCESS Conference on Computers and Accessibility in Denver, Colorado, in October, the authors received the SIGACCESS ASSETS Paper Impact Award, recognizing an ASSETS conference paper from 10 or more years prior that has had “a significant impact on computing and information technology that addresses the needs of persons with disabilities.”
“At the time of our user study, touchscreen smartphones like the iPhone were not accessible and blind people were used to special feature phones with buttons that were accessible. Blind people might have separate mobile devices for activities like phone calls, navigation and playing music,” explained senior author and Allen School professor emeritus Richard Ladner. “The user study in ‘Freedom to Roam’ opened up the possibility that there could be one accessible mobile device that could be used for many purposes: phone, navigation tool, music player and more. This desire of blind participants became a reality.”
Through a series of interviews and diary studies, the researchers outlined the most important issues for people with visual and motor disabilities when using their mobile devices. Participants noted struggles such as on-screen text that was too small or too low in contrast and the lack of exposed, tactile buttons, which previous studies had established. This study, however, also identified additional challenges. For example, environmental factors can impact how mobile devices function. Low vision participants mentioned that their screens were only readable under ideal lighting conditions. Other participants explained that it was difficult to use their mobile devices while walking, as it reduced their motor control or their situational awareness by making it hard to hear sounds in the environment.
Despite the many accessibility problems participants encountered, they also showcased strategies for successfully working with troublesome devices. For example, one participant with a motor impairment was able to effectively use his mobile phone by placing it in his lap to dial. When possible, participants modified their devices, such as by increasing the text size or installing accessibility software such as screen readers; however, the device settings often did not have enough flexibility to meet their needs.
In the years after the paper’s publication, many of the participants’ desired features have materialized. Multiple participants anticipated that many of the functions they requested, such as screen readers, speech input and optical character recognition, could be combined into a single device rather than spread across separate ones, a vision realized by the modern smartphone. Two users in the study also mentioned that user-installable apps could increase accessibility, anticipating marketplaces like the Apple App Store and Google Play.
Additional authors include Jacob Wobbrock, UW Information School professor and Allen School adjunct faculty member; Allen School alum Chandrika Jayant (Ph.D., ‘11), head of design at Be My Eyes; and Shaun Kane, who received his Ph.D. from the UW Information School and is now a research scientist in responsible AI at Google.
From developing recyclable electronics to leveraging artificial intelligence in estimating carbon footprints, Allen School Ph.D. student Zhihan Zhang is tackling the challenge of sustainability on multiple fronts.
“The goal is to build a new generation of ubiquitous computing that is sustainable by design — from its physical materials to accelerated decision-making by autonomous AI — and in how it values human effort and ultimately augments everyone’s health,” Zhang said. “This is a massive, interdisciplinary problem. Our current computing ecosystem was not designed for this, so we can’t just solve it with better software or AI models.”
In his research, Zhang, who is co-advised by Allen School professors Vikram Iyer and Shwetak Patel, focuses on sustainable ubiquitous computing. He aims to reimagine computing ecosystems by integrating emerging sustainable materials and new sensing paradigms. For his contributions, Zhang was awarded a 2025 Google Ph.D. Fellowship in health research. The fellowship supports exceptional graduate students from around the world who represent “the next generation of scientists focused on critical foundational science.”
Printed circuit boards (PCBs) make up a large portion of the world’s environmentally hazardous electronic waste — almost all electronic devices contain a PCB, and their interconnected parts make them nearly impossible to recycle. As a more sustainable alternative, Zhang and his collaborators introduced a new PCB that can be repeatedly recycled with minimal material loss and performs on par with traditional PCBs made of hard plastic. The team found that these new PCBs could lead to an almost 50% reduction in global warming potential compared to conventional PCBs.
Beyond reducing electronic waste, Zhang develops AI models to help users better understand the environmental impact of everyday decisions. Life Cycle Assessments (LCAs) provide a framework for evaluating the environmental impact of a product, service or process throughout its entire life cycle, but they can be difficult for non-experts to navigate. To help bridge this gap, Zhang and his team introduced the first autonomous sustainability assessment tool that transforms unstructured natural language descriptions into interactive environmental impact visualizations. He also helped reimagine the labor-intensive process of LCAs in another way through a multi-agent AI system that can autonomously generate life cycle inventories and estimate the environmental impact of electronic devices using public data sources.
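At its core, a life cycle inventory tallies the environmental burden of each stage of a product’s life. A toy sketch of that idea is below; the stage names and kg CO2e values are purely illustrative and are not drawn from Zhang’s systems.

```python
# Toy life cycle assessment (LCA) sketch: sum per-stage greenhouse gas
# emissions for a hypothetical electronic device. All values are
# illustrative placeholders, not real measurements.

def lca_footprint(inventory: dict) -> float:
    """Total cradle-to-grave footprint (kg CO2e) across all stages."""
    return sum(inventory.values())

device_inventory = {
    "raw_materials": 12.0,   # mining and refining
    "manufacturing": 25.5,   # PCB fabrication and assembly
    "transport": 3.2,        # shipping to consumers
    "use_phase": 40.0,       # electricity over the device lifetime
    "end_of_life": 1.8,      # recycling or disposal
}

total = lca_footprint(device_inventory)
print(f"Estimated footprint: {total:.1f} kg CO2e")
```

An automated system like the one described above would fill in such an inventory from public data sources rather than hand-entered numbers, which is what makes the traditional process so labor-intensive.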
For Zhang, however, sustainability extends beyond addressing climate change to encompass issues such as personal health and public well-being. Alongside a team of Google researchers, Zhang helped develop a personal health agent that analyzes data from wearable devices and medical records to provide personalized, evidence-based guidance.
“It’s been incredible to see the breadth and depth of Zhihan’s research on how his work in AI touches sustainability, health, materials, sensing and HCI,” said Patel, who holds the Washington Research Foundation Entrepreneurship Professor in Computer Science & Engineering and Electrical & Computer Engineering.
Next, Zhang plans to launch a startup to help translate his research into more practical, real-world solutions. He is also designing agentic workflows to address diverse scientific challenges, including large AI models for sustainable materials discovery and seismological analysis.
In addition to receiving a Google Ph.D. Fellowship, Zhang has also been named a Heidelberg Laureate Forum Young Researcher and has received a Bob Bandes Award honorable mention, which recognizes exceptional teaching assistants in the Allen School.
“Zhihan stands out for his breadth, transcending disciplinary boundaries to tackle hard problems in sustainability across the computing stack,” Iyer said. “He can both fabricate recyclable circuits in a wet lab and build AI systems to automatically calculate a device’s carbon footprint. I’m excited for his next steps applying insights from this work on sustainability to a broader set of impactful problems in health care and accelerating scientific discovery.”
The Nickelsville Northlake tiny house village. Photo by Kurtis Heimerl.
Modern Internet of Things (IoT) technologies such as smart home devices, sensors and security cameras have transformed how we interact with our physical environments. As University of Washington researchers discovered, they also have the potential to help residents living in tiny house villages, a type of emergency shelter for those experiencing homelessness.
Through a series of visits and participatory design workshops with residents of two tiny house villages in Seattle, Washington, a team led by Allen School professor Kurtis Heimerl (B.S., ‘07) found ways for residents to leverage and augment their living spaces with smart technologies. The researchers also identified challenges such as land ownership and privacy concerns that residents need to balance with sensor design and deployment. Their paper “Participatory Design in Precarity: ‘Smart’ Technologies for Tiny House Villages” received an honorable mention at the 28th ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing (CSCW) in October.
To learn more about the team’s findings, we spoke with Heimerl and Allen School postdoc Esther Han Beol Jang (Ph.D., ‘24), first author on the CSCW paper.
What are tiny house villages?
KH: Tiny house villages are a Pacific Northwest transitional housing solution. The idea is basically a spot for unhoused people to get their own space. There are two organizations that run them — the Low Income Housing Institute, which partners with the city, and the Nickelsvilles, which are independent, cooperatively run tiny home villages and the ones used in this study.
EJ: These are communal living arrangements, so they have shared kitchens and bathrooms, and the Nickelsvilles have a self-organized structure where the residents actually run the day-to-day operations.
What are some of the problems in tiny house villages that IoT technologies can help address?
Kurtis Heimerl
KH: One of the themes that came out was infrastructure management — this is stuff like water and power. These infrastructures are often ad hoc and precarious. For instance, the Central District tiny house village has its water coming from a hose on the neighbor’s house, and this is water for approximately 20 people, including showers and washing machines. One of the big results of the paper was the need to help residents monitor and evaluate failures across these ad hoc infrastructures in the village.
EJ: In the paper, we come up with design principles for technologies that are well-suited for outdoor encampments. All of these sensors that we want to put on — like the water supply, for example, to see whether there is a clean water supply, whether the water is flowing or if their water has been cut off — need to be easy to put on and easy to remove as well as portable.
What challenges did the team run into with implementing these technologies?
KH: Cost is a big factor. We identified some water quality sensors that could do the job asked of them, but they cost thousands of dollars and were clearly unsustainable for a space run as tightly as Nickelsville.
EJ: One of the things about the way current IoT technologies are designed is there is an owner, and that owner has an online account for configuring the device or registering it, or doing some kind of integration with it, usually in the cloud. There is a churn of users in this transitional housing, but you need to have organizational continuity for the ownership of the sensors, and that can be really tricky with existing technologies.
The other thing that goes on with the cloud connectivity of all of these devices is that a lot of them report data back to the manufacturer, or have data that’s stored in the cloud — that is something that the villagers didn’t want. It would be best to have IoT technologies that report to an internal platform or a locally hosted platform, but a lot of technologies are not set up to do that.
What do you hope others take away from this research?
Esther Han Beol Jang
KH: Our experience is that these tiny house villages are a successful solution in the homelessness space, and something that should be encouraged. The barriers we saw are the barriers that they experience even outside of IoT and technology spaces, and that makes it really difficult, so it’s really amazing to see their success despite that difficulty.
Lastly, this is a really interesting research environment. It’s under-researched. These are populations that are assumed to have very little digital ability and very few resources, and despite those limitations there’s a lot of capacity and interest and value in doing technology work with them.
EJ: Homelessness is a problem that crosses a lot of boundaries, and a lot of people end up homeless who you would never expect. Those people may pass through after just a short time, or they may be there for years, and so you do get people who are technologically savvy in these villages. It’s a huge missed opportunity if we don’t engage with those people, and help them get out of their situation.
Additional authors include Jennifer Webster, research project manager at the Allen School; Ph.D. student Kunsang Choden and professor Jason Young at the UW Information School; Emma Jean Slager, a professor in the UW Tacoma School of Urban Studies; and Christopher Webb, a faculty member at Seattle Central College.
When analyzing a large dataset, it can be overwhelming trying to figure out where to look for interesting insights. Visualizing the data can help, but manually specifying charts can be a daunting and time-consuming task for users without extensive domain expertise. To help make datasets easier to understand and examine, in 2015, a team of researchers led by Allen School professor Jeffrey Heer introduced Voyager, a system that automatically generates and recommends charts and visualizations based on statistical and perceptual measures — allowing users to efficiently explore parts of the dataset they may not have discovered before.
Since its release, Voyager has been called a “landmark development” and has helped transform visualization technology from human-led interactive visualization to a mixed-initiative approach. At IEEE VIS 2025 earlier this month in Vienna, Austria, Heer and his co-authors were recognized with the InfoVis 10-Year Test of Time Award for the Voyager paper’s lasting impact on the field. With the advancement and increasing popularity of artificial intelligence tools, the award committee noted, the system’s mixed-initiative approach is even more relevant today.
“The Voyager paper is exciting to me for many reasons. The work introduced a new approach to visualization recommendation and how to richly incorporate recommendations within user interfaces — all of which has proved influential to ongoing research,” said Heer, who holds the Jerre D. Noe Endowed Professorship in the Allen School. “To provide a solid representation for reasoning about visualizations and recommending charts, we also invented Vega-Lite, a high-level language for statistical graphics that went on to become a popular open source tool in its own right.”
Underlying Voyager is the Compass recommendation engine, which takes in user selections, the data schema and statistical properties and generates suggested visualizations in the form of Vega-Lite specifications. For example, an analyst looking at a dataset about cars can use Voyager to examine the effect of different variables such as horsepower, number of cylinders and acceleration. If the analyst is interested in looking at horsepower, they can browse charts with varied transformations of horsepower in the Voyager gallery and find the car with the highest horsepower. Voyager can also recommend visualizations with additional variables to help the analyst see if there are potential correlations or pursue other questions.
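Vega-Lite specifications are declarative JSON descriptions of charts. As a rough illustration of the kind of spec Voyager might recommend for the horsepower example, here is a minimal histogram specification built as a plain Python dict; the dataset path and field name are assumptions for illustration, and real Compass output includes additional ranking logic.

```python
import json

# A minimal Vega-Lite-style specification for a histogram of horsepower
# values, expressed as a Python dict. The data URL and "Horsepower"
# field name are illustrative placeholders.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"url": "data/cars.json"},
    "mark": "bar",
    "encoding": {
        # Bin the quantitative horsepower field along the x-axis...
        "x": {"field": "Horsepower", "type": "quantitative", "bin": True},
        # ...and count the records in each bin along the y-axis.
        "y": {"aggregate": "count", "type": "quantitative"},
    },
}

print(json.dumps(spec, indent=2))
```

Because the specification is just data, a recommender like Compass can enumerate and mutate such specs programmatically — swapping fields, transformations and mark types — and then rank the candidates for presentation in the gallery.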
In a user study comparing Voyager to a visualization tool modeled after Tableau, the researchers found that Voyager helped users explore more of the data, leading to a significantly greater coverage of unique variable combinations.
Additional authors include Kanit Wongsuphasawat (Ph.D., ‘18), visualization team lead at Databricks; Dominik Moritz (Ph.D., ‘19), faculty at Carnegie Mellon University; Anushka Anand, director of product management at Salesforce; Jock Mackinlay, former technical fellow at Tableau; and Allen School adjunct faculty member Bill Howe, a professor in the University of Washington Information School.