
Four Allen School undergraduates receive national recognition for research contributions in AI, HCI, accessibility and more

The University of Washington "W" stands in the foreground with the Drumheller Fountain spraying in the background.
Photo by University of Washington

Earlier this month, the Computing Research Association (CRA) honored a group of undergraduate students from across the country who have made notable contributions to the computing research field. Four Allen School undergraduates received honorable mentions in the CRA Outstanding Undergraduate Researcher Awards competition — Chun-Cheng Chang, Ritesh Kanchi, Alexander Metzger and Kenneth Yang.

Many students first develop their research skills in their Allen School courses. Professors Leilani Battle and Maya Cakmak, who co-chair the undergraduate research committee, designed a sequence of seminars that introduce students to research and give them the opportunity to work on a hands-on research project with a faculty or graduate student mentor. 

“Participating in research allows students to exercise their creativity and apply their computing skills to advance science and technology, in a way that is not possible in their classes,” Cakmak said. “Undergraduate research is often the first step toward a research career and the Allen School is committed to enabling as many of our students to experience research and have those career opportunities.”

Research opportunities can help students realize new and unexpected applications for computer science.

“Research reveals a side of computer science that students might not see in a class or a typical internship, such as how computer science can lead to community-driven initiatives, exciting opportunities with nonprofit organizations and even founding your own startup company,” Battle said.

From helping farmers build networks to sell their products to developing innovative ways for children to interact with technology, these CRA-recognized students have shown that computer science research does not always look the way you might expect. 

Chun-Cheng Chang: Pushing the boundaries of wireless communication, robotics and AR

headshot of Chun-Cheng Chang
Chun-Cheng Chang

Chun-Cheng Chang’s passion for combining hands-on creativity with technical innovation began when he founded his middle school’s carpentry club. Since then, Chang has focused on developing cutting-edge “solutions that push the boundaries of wireless communication, robotics and augmented reality (AR) interfaces.”

“My research interest lies within the paradigm of low-power sensing and human-computer interaction (HCI). My low-power sensing work focuses on the practical solutions for battery-free systems that may be used in environmental monitoring and health sensing,” Chang said. “For HCI, I designed systems that benefit different applications, including improving robotics manipulation data collection, user experience and designing large language model (LLM) assistance systems for mobile health data analysis.”

Analog backscatter communication is promising for low-power wireless devices because it can send messages using far less energy than popular methods such as Bluetooth. However, existing systems are limited in their transmission range and security. As part of the Allen School’s UbiComp Lab led by professor Shwetak Patel, Chang helped develop an analog backscatter tag that achieves a transmission range of up to 600 meters in outdoor line-of-sight scenarios, compared to a maximum of 20 meters for state-of-the-art systems. 

Chang has also turned his research skills toward building artificial intelligence (AI) assistants for mental health. The UbiComp Lab’s project looked at how LLMs could be used to interpret ubiquitous mobile health data from smartphones and wearables to generate mental health insights. Chang helped fine-tune and train three LLMs for mental health evaluation, and he conducted user study interviews with clinicians. The team’s study revealed an opportunity for clinicians and patients to use collaborative human-AI tools to explore self-tracking data. Their research was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT 2024).

In addition to wireless sensing systems and AI, Chang’s research spans the intersection between sensing and robotics. He helped design millimeter-scale, battery-free microrobots that can be used in environmental monitoring and exploration. When Chang joined the project, he noticed issues with the precise alignment of the MilliMobile microrobot prototype’s motors and wheels. To address this, he redesigned and improved the prototype’s schematics, printed circuit board and software. These improvements helped the team, alongside collaborators at Columbia University, develop Phaser, a system framework that enables laser-based wireless power delivery and communication for microrobots.

As robots become more popular, Chang has also introduced software to help everyday users control the devices. He worked with professor Ranjay Krishna, who co-directs the Allen School’s RAIVN Lab, to design EVE, an iOS application that allows users to train robots using intuitive AR visualizations. With the app, users can preview the robot’s planned motion and see the robot arm’s reach-range boundary for the action. Chang and his collaborators presented their research at the 37th Annual ACM Symposium on User Interface Software and Technology (UIST 2024).

Ritesh Kanchi: Designing AI tools for accessibility and education

Headshot of Ritesh Kanchi
Ritesh Kanchi

Ritesh Kanchi was drawn to the HCI research community because it gave him “the opportunity to design technologies for those who benefit most; everyone stands to gain from HCI — it’s all about the humans.” His work as an undergraduate researcher tackles issues across HCI, computer science education and accessibility. 

“Across my research experiences, I’ve defined my focus on developing Intelligent User Interfaces (IUIs) that empower niche communities in accessibility and education,” Kanchi said. “My work is rooted in creating context-aware, personalized and adaptive interfaces that address diverse user needs, making traditionally inaccessible experiences more inclusive.”

Since his first quarter at the University of Washington, Kanchi has been interested in designing new technology for children. He joined the Information School’s KidsTeam research group exploring how children use emerging generative AI tools to support, rather than take over, creative activities such as stories, songs or artwork. Their research received an honorable mention at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2024). In collaboration with the University of Michigan, Kanchi and his KidsTeam co-authors investigated how children understand computational data concepts such as “the cloud” and their privacy implications. They presented their paper at the 23rd annual ACM Interaction Design and Children Conference (IDC 2024) — the top conference in design of technology for children. 

Kanchi’s research, however, is not limited to young children. Kanchi was selected for the U.S. National Science Foundation Research Experiences for Undergraduates (NSF REU) program, where he worked with researchers at Carnegie Mellon University to analyze the effects of Intelligent Tutoring Systems (ITS) on middle schoolers’ learning rates and initial knowledge compared to traditional paper practice. 

“How children interact with technology today influences the way that adults interact with technology tomorrow,” Kanchi said. 

His HCI research continued at the Allen School’s Makeability Lab led by professor Jon Froehlich. Kanchi worked with Allen School Ph.D. student Arnavi Chheda-Kothary to investigate using AI for accessibility, especially among mixed visual-ability families. Many children express themselves through artwork that is primarily visual and non-tactile, such as using crayons, markers or paints, which can make it difficult for blind or low-vision (BLV) family members to experience and engage with the art. 

To tackle this issue, Kanchi helped develop ArtInsight, a novel app that allows mixed visual-ability families to use AI to engage with children’s artwork. Previous research explored how AI can help BLV family members read with children, but none focused on art engagement. The app uses large language models to generate creative and respectful initial descriptions of the child’s artwork as well as questions to facilitate conversations between BLV family members and children about the art. For Kanchi, this research project was “a culmination of who I am as a researcher, combining my passions and prior research, including children’s design research, user interface development and accessibility.”

Kanchi’s passion for accessibility has even benefited his Allen School community. As a lead teaching assistant for CSE 121, the first course in the Allen School’s introduction to computer programming series, he converted course content to make it compatible with screen readers. He is working with Allen School professors Matt Wang and Miya Natsuhara on designing a system for creating accessible course diagrams, including data structures such as binary trees and linked lists, in multiple content modalities.

Alexander Metzger: Using mobile technologies for environmental sustainability and agricultural economic development

Headshot of Alexander Metzger
Alexander Metzger

From helping farmers in developing nations expand their networks to making it easier for users to understand the environmental impact of their electronics, Alexander Metzger has set his sights on making a global impact with his work.

“My research aims to bridge gaps in information access for underserved languages and communities through a combination of algorithm design, machine learning and edge device technology,” Metzger said.

During his first year at the Allen School, he joined the Information and Communication Technology for Development (ICTD) Lab, co-led by professors Richard Anderson and Kurtis Heimerl, to work on the eKichabi v2 study. Smallholder farmers across Tanzania often lack accessible networking platforms, making them reliant on middlemen to sell their produce. The research team, alongside collaborators at Cornell University, compiled the largest agricultural phone directory to date; however, they still faced the challenge of ensuring that the digital directory was accessible to users with limited internet and smartphone access. 

To tackle this issue, Metzger helped develop and maintain an Unstructured Supplementary Service Data (USSD) application that allowed users to access the directory using simple menus on basic mobile phones. As the project advanced, he analyzed the USSD application’s technical performance to answer the question of how well an information system could address the divide between basic phones and smartphones. Metzger was the co-lead author on the paper examining the application’s HCI and economic challenges published at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2024).

Continuing his research into mobile technologies, Metzger turned his attention toward addressing information access challenges that other smallholder farmers face. Smallholder farmers across countries including Kenya, India, Ethiopia and Rwanda often lack access to trustworthy sources to learn about sustainable farming practices and how to tackle issues such as crop disease. Metzger worked with Gooey.AI to help develop and deploy Farmer.CHAT, a GPT-4-based, multilingual platform that provides extension workers with personalized advice and guidance. The Farmer.CHAT team presented the platform before the United Nations General Assembly.

Environmental decision-making is not just a problem for farmers. As a member of the Allen School’s UbiComp Lab, Metzger is working on developing multi-modal hybrid machine learning and computer vision algorithms that can make environmental impact estimates more accessible. He is also designing battery-free balloon- and glider-based sensor systems that provide developing countries with the data to help predict and prevent environmental disasters, including wildfires.

Outside of his Allen School work, Metzger recently founded Koel Labs, a research-focused startup that trains open-source audio models to help language learners improve their pronunciation.

Kenneth Yang: Removing inefficiencies in software engineering and neuroscience

headshot of Kenneth Yang
Kenneth Yang

When Kenneth Yang solves a problem — whether it is a tricky programming puzzle or a neuroscience research headache — his first thought is, “there has to be a better way to do this.”

“Because our lives are technology-infused, I turned to computer science as a tool to help me solve problems efficiently, predictably and accurately,” Yang said. “My research focuses on identifying and solving inefficiencies in existing processes and aims to improve algorithms and methods used in computer graphics, software engineering and neuroscience.”

As a research assistant to Allen School professor Michael Ernst in the Programming Languages and Software Engineering (PLSE) group, Yang is turning his problem-solving skills toward the challenge of merging code. Version control systems allow multiple developers to work on the same code at the same time and then combine their edits into a single version. While there are many algorithms and merge tools available to combine code automatically, they often fail — leading to unexpected bugs and forcing developers to manually resolve and integrate changes.

However, these different merge tools, including new approaches, have not been evaluated against each other. The limited experiments that have been performed have excluded systems such as Git, the dominant version control system, or have only measured the benefits of correct merges, not the costs of incorrect ones. Yang and his collaborators addressed these problems and showed that many earlier merge tools are less effective than previously thought. They presented their results in their paper “Evaluation of Version Control Merge Tools” that appeared at the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE 2024).

Yang is also using his software skills to help neuroscientists accelerate their research. Neuroscientists gather brain data using electrophysiology, inserting probes into the brain to detect neurons’ electrical activity. The small scale of different brain regions, alongside other factors, can make the process difficult to perform and hard to scale to larger studies. 

Yang’s research with the Virtual Brain Lab group, within the UW Department of Neurobiology & Biophysics’ Steinmetz Lab, develops software and tools that make electrophysiology experiments more efficient, repeatable and automated. He co-developed the software platform Pinpoint, which allows neuroscientists to interactively explore 3D brain models, plan experiments, and control and maneuver robotic electrode manipulators into the correct location. 

To work alongside Pinpoint, he designed and wrote the Electrophysiology Manipulator Link, or Ephys Link, a unifying communication platform for popular electrophysiology manipulators that translates planning data into specific robotic movements. Yang’s tools have reduced the insertion process from 15 minutes per probe to 15 minutes total, with minimal researcher intervention. In future research, he aims to enable researchers to fully automate electrophysiology and make complex multi-probe studies possible.

In addition to honoring these four undergraduates at UW, the CRA recognized another student with an Allen School connection. Awardee Gene Kim, an undergraduate student at Stanford University, has collaborated with Allen School professor Jen Mankoff on the development of assistive technology to help make data visualization, personal fabrication and design tools more accessible for blind and visually impaired people.

Read more about the CRA Outstanding Undergraduate Researcher Awards here.