
A picture of health: Google Fellowship recipient Xin Liu combines machine learning and mobile sensing through an equity lens to support remote health assessment

Portrait of Xin Liu in dark blue button-up shirt and glasses standing outdoors with fall foliage and buildings blurred in the background.

When COVID consigned doctor-patient interactions from the clinic to a computer screen, Allen School Ph.D. candidate Xin Liu already had his finger on the pulse of that paradigm shift. Since his arrival at the University of Washington in 2018, Liu has worked with professor Shwetak Patel in the UbiComp Lab to apply mobile sensing and machine learning to real-world problems in health care, with a focus on developing non-contact, camera-based physiological screening and monitoring solutions that are accessible to all and adaptable to a wide range of settings. His goal is to “democratize camera-based non-contact health sensing for everyone around the world by making it accessible, equitable, and useful.”

Liu began his undergraduate education in the U.S. as an international student and quickly realized how difficult it was to integrate into a new culture. As an undergraduate at UMass Amherst, he became the first international student consultant and peer mentor, and he encouraged other international students to embrace leadership roles. “This experience motivated my research in computer science and health, where I aimed to develop useful and accessible computing technologies for diverse populations,” Liu recounted.

Liu’s research would take on new meaning and urgency as the pandemic upended modern social interactions. As remote clinical visits increased significantly during this time, the need for remote ways of sensing and monitoring heart health became critical for medical practitioners and patients alike. Liu’s work combining camera-based physiological sensing with machine learning algorithms could offer new possibilities in early detection of heart-related health issues as well as allow for much-needed remote diagnostics when a patient faces barriers to obtaining in-person care at a clinic — even outside of a pandemic. 

As Liu saw it, several major issues needed to be considered, and in some cases corrected, for camera-based health assessment to be widely applicable. From a practical standpoint, the tool would have to achieve a high level of accuracy for medical practitioners to evaluate patients’ vital signs remotely and make informed clinical decisions. Privacy is another critical concern when it comes to people’s personally identifiable medical information; because of this, any collected data would need to be held locally on the device.

That device could come with varying capabilities — or lack thereof. 

“Since people have access to a wide range of devices, the application has to function on even the most rudimentary smartphone,” Liu explained. “Likewise, people in resource-constrained settings may not consistently have connectivity, so the application would need to be capable of running without being connected to a network.”

Disparate access to resources is not the only consideration for Liu and his colleagues when it comes to equity.

“In the past, camera-based solutions were skewed towards lighter skin tones, and did not function well with darker skin tones,” Liu noted. “To be truly useful, particularly to populations who are already underserved in health care, our application has to function accurately across the full range of skin tones, and under a variety of conditions.”

In 2020, Liu and his fellow researchers proposed the first on-device neural network for non-contact physiological sensing. This novel multi-task temporal shift convolutional attention network (MTTS-CAN) addressed the challenges of privacy, portability and precision in contactless cardiopulmonary measurement. Their paper, which was among the top 1% of submissions to the 34th Conference on Neural Information Processing Systems (NeurIPS), was foundational in that it enabled health sensing on devices with lower processing power. Following this, Liu conceived an even faster neural architecture called EfficientPhys for lower-end mobile devices, which will appear at the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
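The temporal shift idea at the heart of such networks is what makes on-device video analysis tractable: rather than running costly 3D convolutions, a fraction of each frame’s feature channels is shifted forward or backward along the time axis so that ordinary 2D convolutions can mix information across neighboring frames essentially for free. Below is a minimal sketch of that operation — an illustration of the general technique, not the paper’s exact architecture:

```python
import numpy as np

def temporal_shift(features, fold_div=3):
    """Shift a fraction of feature channels along the time axis.

    features: array of shape (time, height, width, channels).
    One 1/fold_div slice of channels is pulled from the next frame,
    another from the previous frame, and the rest stay in place, so
    a following 2D convolution sees information from three frames
    at the cost of one.
    """
    t, h, w, c = features.shape
    fold = c // fold_div
    out = np.zeros_like(features)
    out[:-1, :, :, :fold] = features[1:, :, :, :fold]                  # shift from future
    out[1:, :, :, fold:2 * fold] = features[:-1, :, :, fold:2 * fold]  # shift from past
    out[:, :, :, 2 * fold:] = features[:, :, :, 2 * fold:]             # unshifted
    return out
```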

Last year, Liu proposed an unsupervised few-shot learning algorithm called MetaPhys, presented at the Conference on Health, Inference, and Learning (CHIL), and a camera-based few-shot learning mobile system called MobilePhys, which appeared in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), as steps toward addressing remote sensing’s shortcomings with regard to variations in patients’ skin tones, activities and environmental contexts. He has also been involved with Microsoft Research’s efforts to use synthetic avatars to simulate facial blood flow changes and to systematically generate a large-scale video dataset covering a wide variety of conditions, such as different skin tones and backgrounds. This work has produced a synthetic dataset whose labels are precisely synchronized and noise-free, helping to overcome issues with variability and diversity.

Having proven the concept, Liu has turned his attention to ensuring such tools will be reliable and efficient for diverse populations under real-world conditions. Whereas previous research on non-contact camera-based physiological monitoring has focused on healthy populations, Liu has initiated a collaboration with Dr. Eugene Yang, clinical professor and director of the Eastside Specialty Clinic at UW Medicine, to collect data in in-patient and out-patient clinical settings. Their goal is to validate the team’s machine learning augmented, camera-based approach for obtaining accurate readings for a range of health indicators, such as heart rate and respiration rate, in real-world clinical settings. Ultimately, Liu aims to push the boundaries of non-contact physiological measurement through exploring contactless camera sensing to measure such readings as blood pressure and arrhythmias.

“Xin takes a collaborative and interdisciplinary approach to his research,” said Patel, Liu’s Ph.D. advisor. “He works closely with his clinical partners to inform his research and executes his research with the highest ethical and equitable standards. He has taken on some difficult research challenges around skin tone diversity for health-related AI models that are already having industry impact.” 

Liu earned a 2022 Google Fellowship in Health Research and Artificial Intelligence for Social Good and Algorithmic Fairness to advance this research and support the completion of his dissertation.

Congratulations, Xin! Read more →

Lost in translation no more: IBM Fellowship winner Akari Asai asks — and answers — big questions in NLP to expand information access to all

Portrait of Akari Asai wearing grey floral lace top with black trim and dangling earrings against a grey background

Growing up in Japan, Akari Asai never imagined that she would one day pursue a Ph.D. at the Allen School focused on developing the next generation of natural language processing tools. Asai hadn’t taken a single computing class before her arrival at the University of Tokyo, where she enrolled in economics and business courses; her first foray into computer science would come thousands of miles from home, while studying abroad at the University of California, Berkeley. The experience would alter the trajectory of her academic career and put her on a path to solving problems on a global scale.

“I changed my major in the middle of my undergraduate studies, and I wished I had discovered computer science and opportunities for pursuing my career abroad earlier,” said Asai. “My own situation made me realize the importance of information access for everyone.”

That realization led Asai to pursue her Ph.D. at the University of Washington, where she now works with Allen School professor Hannaneh Hajishirzi in the H2Lab developing next-generation AI algorithms that offer rich natural language comprehension using multi-lingual, multi-hop and interpretable reasoning.

“Akari is very insightful and cares deeply about the impact of her work,” observed Hajishirzi, who is also a senior research manager in the Allen Institute for AI’s AllenNLP group. “She is bridging the gap between research and real-world applications by making NLP models more efficient, more effective, and more inclusive by extending their benefits to languages other than English that have been largely ignored.”

More than 7,100 languages are spoken in the world today. While English is the most prevalent, spoken by nearly 1.5 billion people, the global population is nearing 8 billion — meaning a significant proportion of the world’s population is excluded from the benefits of today’s powerful NLP models. Asai is trying to close this gap by enabling universal question answering systems that can read and retrieve information across multiple languages. For example, she and her collaborators introduced XOR-TyDi QA, the first large-scale annotated dataset for open-ended information retrieval across seven languages other than English. The approach — XOR QA stands for Cross-lingual Open Retrieval Question Answering — enables questions written in one language to be answered using content expressed in another.

Asai also contributed to CORA, the first unified multilingual retriever-generator framework that can answer questions across many languages — even in the absence of language-specific annotated data or knowledge sources. CORA, short for Cross-lingual Open-Retrieval Answer Generation, employs a dense passage retrieval algorithm to pull information from Wikipedia entries, irrespective of language boundaries; the system relies on a multilingual autoregressive generation model to answer questions in the target language without the need for translations. The team incorporated an iterative training method that automatically extends the annotated data previously only available in high-resource languages to low-resource ones. 

“We demonstrated that CORA is capable of answering questions across 28 typologically different languages, achieving state-of-the-art results on 26 of them,” Asai explained. “Those results include languages that are more distant from English and for which there is limited training data, such as Hebrew and Malay.”
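The retriever-generator pattern behind CORA can be summarized in a few lines. The sketch below is a simplified illustration of that two-stage pipeline; the object interfaces and function names are assumptions for readability, not CORA’s actual API:

```python
def answer_question(question, passages, retriever, generator, k=10):
    """Cross-lingual retrieve-then-generate QA, sketched.

    retriever: a dense encoder that embeds questions and passages in a
    shared multilingual vector space, so evidence can be pulled from
    Wikipedia regardless of language.
    generator: a multilingual autoregressive model that reads the
    question plus the retrieved evidence and writes the answer in the
    question's language, with no translation step.
    """
    # Score every candidate passage against the question, whatever
    # language the passage happens to be written in.
    q_vec = retriever.encode_question(question)
    scored = [(retriever.encode_passage(p) @ q_vec, p) for p in passages]

    # Keep the top-k passages as evidence, possibly mixed-language.
    evidence = [p for _, p in sorted(scored, key=lambda pair: -pair[0])[:k]]

    # Generate the answer directly in the target language.
    return generator.generate(question=question, context=evidence)
```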

Language is not the only barrier Asai is working to overcome. The massive computational resources required to operate the latest, greatest language models, which few groups can afford, also put them out of reach for many. Asai is making strides on this problem, too, recently unveiling a new multi-task learning paradigm for tuning large-scale language models that is modular, interpretable and parameter-efficient. In a preprint, Asai and her collaborators explained how ATTEMPT, or Attentional Mixture of Prompt Tuning, meets or exceeds the performance of full fine-tuning approaches while updating less than one percent of the parameters required by those other methods.
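Prompt tuning gets its parameter efficiency from freezing the backbone model and training only small “soft prompt” matrices prepended to the input. Here is a rough sketch of the attentional-mixture idea, under assumed interfaces rather than the paper’s exact formulation:

```python
import numpy as np

def mixed_prompt(x_embedding, source_prompts, target_prompt, attention):
    """Sketch of an attentional mixture of soft prompts.

    source_prompts: prompt matrices pre-trained on large source tasks.
    target_prompt: a new trainable prompt matrix for the target task.
    attention: a small trainable module producing per-instance weights.
    Only the prompts and the attention module are trained; the large
    language model stays frozen, which is how such methods update well
    under 1% of the parameters a full fine-tune would touch.
    """
    prompts = source_prompts + [target_prompt]
    weights = attention(x_embedding, prompts)  # instance-wise mixture weights
    mixed = sum(w * p for w, p in zip(weights, prompts))
    # Prepend the mixed prompt to the input embeddings; the frozen
    # model then runs exactly as usual.
    return np.concatenate([mixed, x_embedding], axis=0)
```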

Asai is also keenly interested in the development of neuro-symbolic algorithms that are imbued with the ability to deal with complex questions. One example is PathRetriever, a graph-based recurrent retrieval method that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions at web scale. By leveraging a reading comprehension model alongside the retriever model, Asai and her colleagues enabled PathRetriever to explore more accurate reasoning paths in answer to complex questions compared to other methods. Some of her co-authors subsequently adapted the system to enable complex queries of scientific publications related to COVID-19. 

Ultimately, Asai intends to integrate the various facets of her research into a general-purpose, lightweight retriever and neuro-symbolic generator that will be capable of performing complex reasoning over diverse inputs while overcoming data scarcity. Having earned a 2022 IBM Ph.D. Fellowship earlier this year to advance this work, Asai’s ambition is to eliminate the disparity between the information “haves” and “have nots” by providing tools that will empower anyone to quickly and easily find what they need online — in multiple languages as well as multiple domains.

“Despite rapid progress in NLP, there are still several major limitations that prevent too many people from enjoying the benefits of that progress,” she explained. “My long-term research goal is to develop AI agents that can interact with broad swaths of internet users to answer their questions, giving everyone equal access to information that might otherwise be limited to certain default audiences.”

Her commitment to promoting equal access extends beyond information retrieval to include the field of NLP itself; to that end, Asai is an enthusiastic mentor to students from underrepresented backgrounds.

“I’m excited to continue making progress on my own research interests,” said Asai, “but I hope to also inspire the next generation of researchers in AI.”

Way to go, Akari! Read more →

Jeffrey Heer wins Test of Time Award at IEEE VIS for helping the visualization community better understand challenges facing data analysts

Jeffrey Heer seated at a wooden desk with his hands folded in front of an open laptop, which is positioned in front of two large computer monitors — one dark, one showing various graphs onscreen. A telephone is to the left of the monitors, and a textbook is on the desk. Part of a window and guest sofa are visible in the background.

Allen School professor Jeffrey Heer received a Test of Time Award at the 2022 IEEE Visualization & Visual Analytics Conference this week, marking the third consecutive year that his work has been recognized with the honor. 

Heer, the co-director of the Interactive Data Lab, co-authored the winning paper, “Enterprise Data Analysis and Visualization: An Interview Study,” which provided key insights into understanding how data analysts operate and the challenges they encounter in their workflows. The paper was published in IEEE Transactions on Visualization and Computer Graphics in 2012 when Heer and two of his co-authors were researchers at Stanford University. The team presented its findings at the IEEE Conference on Visual Analytics Science and Technology (VAST).

The Test of Time Award recognizes papers published at previous conferences that have had a lasting impact both within and beyond the field of visualization over the ensuing decade. 

“We’re thrilled that our colleagues still find our work relevant 10 years later,” said Heer, who holds the Jerre D. Noe Endowed Professorship at the Allen School. “While research ‘style’ probably plays a part, I think we were also in the right time and place as societal interest in data and analysis heated up. I also think it helps that our work has had interdisciplinary teams spanning human-computer interaction, data management, cognitive science and other disciplines, bringing more varied perspectives.” 

As the team members investigated tools to aid data analysis, they were particularly interested in what Heer referred to as “the messy early stages of data cleaning and preparation.” While they had hands-on experience themselves, they wanted to gain a deeper understanding of the different types of analysts, the issues they encountered and the infrastructure that supported them. 

To tackle this problem, the team launched a qualitative study, interviewing participants across a range of sectors, including health care, retail, social networking, finance, media, marketing and insurance. Among the insights they uncovered were analyst archetypes, how analysts collaborated with one another and with other business units, and the challenges they encountered in the analysis process.

“The study helped prioritize our thinking around the varied backgrounds of people working with data, including different levels of statistical and software development expertise, and how organizational dynamics shapes how the work gets done,” Heer said. “Both are important to consider if new tools are going to be usable and effective in practice.”

Heer credited first author Sean Kandel, his former Ph.D. student at Stanford and fellow co-founder of Trifacta, with leading the project. Kandel was the chief technical officer of Trifacta, which develops data wrangling software and a popular data preparation platform used by Google, NASA and others. Trifacta was acquired by Alteryx in February. 

Co-author Andreas Paepcke is the director of data analytics and a senior research scientist at Stanford. Co-author Joseph M. Hellerstein, a co-founder of Trifacta, is the Jim Gray Professor of Computer Science at UC Berkeley, where he was also based at the time of the study.

Heer previously earned Test of Time Awards at VIS for his work on data-driven documents and narrative visualization. Since joining the Allen School faculty in 2013, he has been recognized with the Association for Computing Machinery’s Grace Murray Hopper Award, the IEEE Visualization Technical Achievement Award and Best Paper Awards at the ACM Conference on Human Factors in Computing Systems (CHI), EuroVis and IEEE InfoVis conferences. Last year, Heer was elected to the CHI Academy by the ACM Special Interest Group on Computer-Human Interaction (SIGCHI) for his substantial contributions in the areas of data visualization, data wrangling, text analysis, language translation and interactive machine learning.

Read Heer’s latest award-winning paper here and the award citation here.

Congratulations to Jeff and his co-authors! Read more →

Making “magical concepts” real: Allen School professor Rachel Lin named one of Science News’ 10 Scientists to Watch

Portrait of Rachel Lin leaning against a metal railing in building atrium with concrete, wood and glass in the background

Science News has named professor Huijia (Rachel) Lin, a founding member of the Allen School’s Cryptography group, as one of its SN 10: Scientists to Watch. Each year, Science News recognizes 10 scientists who are making a mark in their respective fields while working to solve some of the world’s biggest problems. Lin earned her place on the 2022 list for achieving a breakthrough on what has been alternately referred to as the “holy grail” or “crown jewel” of cryptography by proving the security of indistinguishability obfuscation (iO).

“I’m very attracted to these magical concepts,” Lin told Science News. “The fun of it is to make this concept come to realization.”

While Lin explores a variety of fundamental problems — from black-box constructions for securing multiparty computation to zero-knowledge proofs — her work on iO has been celebrated for answering an open question that had vexed cryptographers for more than 40 years: how to prove the security of this potentially powerful “master tool” for securing data and computer programs while maintaining their functionality, and thereby bring it into the mainstream. According to the article, previous attempts at proving iO were generally geared toward obtaining a result that would be deemed “good enough” — and one by one, those attempts unraveled under further scrutiny.

Lin aimed for more than “good enough” by seeking a generalizable solution grounded in sound mathematical theory. Rather than approaching iO like “a bowl of spaghetti,” as she put it, Lin preferred to attack the problem by untangling it into its component parts, working alongside University of California, Los Angeles professor Amit Sahai and his then-Ph.D. student and NTT Research intern Aayush Jain, now a professor at Carnegie Mellon University. After two years, the team had a theoretical framework for provably secure iO that was based on a quartet of well-founded assumptions: Symmetric External Diffie-Hellman (SXDH) on pairing groups, Learning with Errors (LWE), Learning Parity with Noise (LPN) over large fields, and a structured-seed Boolean Pseudo-Random Generator (sPRG). The result was first reported in Quanta Magazine in the summer of 2020; Lin and her collaborators subsequently earned a Best Paper Award at the Association for Computing Machinery’s 53rd Symposium on Theory of Computing (STOC 2021) for their contribution — one that Lin is eager to see progress from theory to reality.

“These are ambitious goals that will need the joint effort from the entire cryptography community,” she observed at the time. “I look forward to working on these questions and being part of the effort.”

In other words, watch this space.

Lin is the second member of the Allen School’s Theory of Computation group to be recognized on the SN 10 list, after her colleague Shayan Oveis Gharan was highlighted in 2016.

Read Lin’s profile in Science News here, and the Quanta article on the iO breakthrough here.

Photo: Dennis Wise/University of Washington Read more →

“The sky’s the limit”: Allen School launches new FOCI Center at the UW to shape the future of cloud computing

Seattle skyline viewed from a three-lane highway framed by street lamps, with vivid blue sky and fluffy clouds

The Allen School has established a new center at the University of Washington that aims to catalyze the next generation of cloud computing technology. The Center for the Future of Cloud Infrastructure, or FOCI, will cultivate stronger partnerships between academia and industry to enable cloud-based systems to reach new heights when it comes to security, reliability, performance, and sustainability.

“The first generation of the cloud disrupted conventional computing but focused on similar engineering abstractions, which is typical of many new technologies,” said Allen School professor Ratul Mahajan, co-director of the FOCI Center and co-founder and, until recently, CEO of cloud computing startup Intentionet. “Now that cloud computing is on the cusp of a more radical transformation, this center will help usher in a new era by cultivating tighter partnerships between researchers and practitioners to address emerging bottlenecks and explore new opportunities.”

That transformation is being driven in large part by the rise of machine learning, edge computing, 5G and other burgeoning technologies. According to Mahajan’s Allen School colleague and center co-director Simon Peter, the demands of these new workloads — including exponential growth in the energy required to power their applications — will require researchers to rethink the full computing stack from the ground up.

“Companies and consumers are seeking ever-greater levels of security, reliability and performance in the cloud at a reduced cost,” Peter noted. “Not just monetary cost, but also in terms of cost to the environment. For a while, thanks to Moore’s Law, we were gaining ground when it comes to energy efficiency. But now the gains have slowed or even reversed; for example, in the U.S. the energy demand for computation is growing twice as fast as solar and wind power. So we need to think holistically about the hardware-software interface and how to make cloud computing sustainable as well as resilient and secure.”

One of the areas that Peter and his colleagues are keen to explore is energy-aware cloud computing, which would enable tradeoffs between power and performance while making cloud applications resilient to disruption. Another potential avenue of inquiry is the development of systems to effectively manage the variety of hardware accelerators used in settings such as disaggregated storage and emerging machine learning applications, all while minimizing latency, ensuring fairness, and meeting multi-dimensional resource needs — among other challenges.

Portrait collage of Arvind Krishnamurthy, Ratul Mahajan, and Simon Peter, with UW's block "W" logo in the lower right corner against a purple background
The co-directors of the new FOCI Center at the UW, top, from left: Arvind Krishnamurthy and Ratul Mahajan; bottom left: Simon Peter

How the center approaches these challenges will be informed by a technical advisory board comprising representatives of cloud companies Alibaba, Cisco, Google, Microsoft and VMware — all significant movers and shakers in the cloud space. Their input will help guide the center’s research toward real-world impact based on current trends, the problems they anticipate over a five- to 10-year time horizon, and how solutions might be applied in practice. Center researchers will apply these practical insights to their pursuit of big, open-ended ideas, drawing upon cross-campus expertise in systems, computer architecture, networking, machine learning, data science, security, and more.

“Industry knows the pain points and technology trends; academia is adept at the exploratory, collaborative work that’s fundamental to solving hard problems,” noted Allen School professor and center co-director Arvind Krishnamurthy, who also serves as an advisor to UW machine learning spinout OctoML. “By bringing the two together, this center will not only yield compelling solutions but also contribute to the education of students who will go on to build these next-generation systems.”

The FOCI Center was seeded with industry commitments totaling $3.75 million over three years. The Allen School is hosting a launch event on the UW campus in Seattle today to connect faculty and student researchers with industry leaders interested in shaping the future of cloud computing.

“Seattle is the cloud city, both in weather and as home to the largest cloud companies, so it was only natural to establish a center focused on cloud computing and leverage the synergies between the UW’s research expertise and our local industry leadership of this space,” said Magdalena Balazinska, professor and director of the Allen School. “When it comes to what we can accomplish together, I would say the sky’s the limit.”

To learn more, visit the FOCI Center website and read the coverage by GeekWire here.

Main photo credit: University of Washington Read more →

“Go ahead and take that adventurous route”: Allen School professor Yejin Choi named 2022 MacArthur Fellow

Yejin Choi in a black leather jacket over a black shirt, leaning against a metal railing with a metal, wood and concrete stairwell in the background. A portion of wood-paneled wall is visible in the right of the frame.
Yejin Choi (John D. and Catherine T. MacArthur Foundation)

Yejin Choi, a professor in the Allen School’s Natural Language Processing group, was selected as a 2022 MacArthur Fellow by the John D. and Catherine T. MacArthur Foundation to advance her work “using natural language processing to develop artificial intelligence systems that can understand language and make inferences about the world.” The MacArthur Fellowship — also known as the “genius grant” — celebrates and invests in talented and creative individuals whose past achievements signify their potential to make important future contributions. Each recipient receives a stipend of $800,000 that comes with no strings attached.

“It’s been several weeks since I learned about this award, and it still feels so surreal,” Choi told UW News.

Choi, who joined the Allen School faculty in 2014, may feel like she is dreaming, but her work has had a very real impact. Currently the Brett Helsel Career Development Professor in the Allen School and senior research manager for the Mosaic team at the Allen Institute for AI, Choi has contributed to a series of high-profile projects that have expanded the capabilities of natural language models — and uncovered potential pitfalls. For example, she was among the first to bridge the fields of NLP and computer vision by teaching models to generate original and accurate image descriptions based on visual content in place of conventional statistical approaches. She has also contributed to a variety of tools for analyzing and combating the proliferation of bias and misinformation online, from AI-generated “fake news” to trashy training inputs that lead to toxic language degeneration, along with new methods for assessing the quality of open-ended machine-generated text compared to that generated by humans.

Choi and her collaborators went a step further with the development of Ask Delphi, an experimental platform for exploring how machines might acquire and exercise moral judgment in response to real-world situations. Through this and other work, Choi is pushing the field closer to her overarching goal: to imbue machines with a human-like ability to reason and communicate about the world in both physical and abstract terms. Whatever comes next, Choi is determined to fulfill the spirit of the Fellowship by pursuing the most original and impactful research ideas — even when they are accompanied by a degree of risk.

“Taking the road less traveled may seem exciting at first, but sustaining this path can be lonely, riddled with numerous roadblocks and disheartening at times,” Choi said. “This fellowship will power me up to go ahead and take that adventurous route.”

Previous MacArthur Fellowship winners with an Allen School connection include Choi’s faculty colleague Shwetak Patel; alumni Stefan Savage (Ph.D., ‘02), a professor at the University of California San Diego, and Christopher Ré (Ph.D., ‘09), a professor at Stanford University; and former Allen School faculty member Yoky Matsuoka, currently founder and CEO of Yohana.

Read the UW News release here and check out Choi’s MacArthur Foundation profile here. Read the New York Times story here, GeekWire article here and Crosscut interview with Choi here.

Congratulations, Yejin! Read more →

We’re baaack… Students and companies descend upon the Allen School’s in-person career fairs

Student backpacks of various colors and styles piled on tables against a wall of glass windows and on the floor. A person wearing glasses and a shirt printed with the Allen School name and a stick-on name tag is crouched in the lower left corner of the photo, inserting papers into a blue plastic folder
A sea of student backpacks stashed outside the October 4 career fair

After several years of COVID-induced online career fairs, the Allen School returned to an in-person format this fall!

On October 4 and 6, more than 50 companies — members of the Allen School’s Industry Affiliates program — came to campus to recruit students for full-time, part-time, and internship positions. On each day, the first half of the session was devoted to Allen School students; UW students in related majors joined for the second half. In all, more than 1,000 students participated across the two days.

The fall career fairs are instrumental in connecting students with career opportunities in the local technology community. In 2021-22, the Allen School alone sent more than 60 graduates to full-time positions at Amazon, more than 50 to Google, and more than 40 each to Facebook and Microsoft. Allen School students accepted full-time positions at more than 100 tech companies in total — the vast majority in the Puget Sound region.

The Allen School will host a virtual career fair on October 12, followed by the in-person Data Science career fair, hosted by the Allen School in conjunction with the UW eScience Institute, on October 20. Read more →

A feature and a bug: Vikram Iyer earns SIGMOBILE Doctoral Dissertation Award for engineering systems inspired by nature

Vikram Iyer wearing glasses holding tweezers in his right hand, crouched beside a tree trunk with a large black beetle wearing a camera pack on its back crawling up the trunk

When bees leave the hive, they can spend all day flying and foraging on a single “charge” owing to their ability to convert fats and carbohydrates, which store significantly more energy than batteries of comparable weight. When other insects traverse the landscape, the structure of their retinas combined with the motion of their heads enables them to efficiently take in and process visual information. And when dandelions shed their seeds, structural variations ensure that they are dispersed through the air over short and long distances to cover maximum ground.

Allen School professor Vikram Iyer is not a biologist, but he takes inspiration from these and other biological phenomena to engineer programmable systems and devices that can go where computers have been unable to go before — and solve problems more efficiently and safely than previously thought possible. During his time as a University of Washington Ph.D. student, Iyer imagined how the so-called internet of biological and bio-inspired things could transform domains ranging from agriculture to wildlife conservation. His results recently inspired the Association for Computing Machinery’s Special Interest Group on Mobility of Systems, Users, Data, and Computing to recognize him with the SIGMOBILE Doctoral Dissertation Award for “creative and inspiring work that shows how low-power sensing, computing and communication technologies can be used to emulate naturally-occurring biological capabilities.”

“Building bio-inspired networking and sensing systems requires expertise across multiple disciplines spanning computer science, electrical engineering, mechanical engineering and biology,” said Allen School professor Shyam Gollakota, Iyer’s Ph.D. advisor. “I think this thesis breaks new ground by designing programmable technologies that not only mimic nature but also take the crucial step of integrating electronics with living organisms.”

Gollakota and Iyer worked together on a series of projects that gave new meaning to the term “computer bug” — but in this case, they took their lessons from the kind of bugs with legs and wings. For one of their early projects, Living IoT, the team developed a scaled-down wireless sensing and communication platform that was light enough to be worn by bumblebees in flight. The tiny sensor backpacks incorporate a rechargeable power source, localization hardware, and backscatter communication to relay data once the bee returns to the hive, in a form factor topping out at a mere 102 milligrams. Later, when the northern giant hornet — colloquially referred to as the “murder hornet” — was sighted in northwest Washington, the state’s Department of Agriculture enlisted Iyer’s help in designing and affixing tiny tracking devices onto a live specimen so that agency staff could track it back to the nest.

After seeing their concept of on-board sensors for insects take flight, Iyer and his colleagues came back down to earth to develop a tiny wireless camera inspired by insect vision. Their system, which they dubbed “Beetlecam,” offered a fully wireless, autonomously powered, mechanically steerable vision system that imitates the head motion of insects. By affixing the camera onto a moveable mechanical arm, the team could mimic insect head motion to capture a wide-angle view of the scene and track the movement of objects while expending less energy — and at a higher resolution. The complete vision system, which can be controlled via smartphone, is small enough to mount on the back of a live beetle or insect-sized terrestrial robots such as their own prototype built to demonstrate the capabilities.

Many sensors, including those designed by Iyer and his collaborators, still require a method of transportation, be it beetle, bee, or robot. Iyer and his collaborators wondered if they could design sensors capable of delivering themselves. The answer, to borrow a phrase from singer/songwriter Bob Dylan, was blowing in the wind — in the form of dandelion seeds. Iyer and the team developed a wireless, solar-powered sensing and communication system that can be carried aboard flexible, thin shapes. The shapes are designed to carry the sensors through air and land upright 95% of the time, relying on a structure reminiscent of the bristle-like shape of dandelion seeds — with some necessary modifications to accommodate the weight of the attached sensor. The team also demonstrated that, by modulating the porosity and diameter of the structures, they can ensure the sensors are dispersed at various distances like the seeds.

Unlike many miniaturized systems, Iyer’s flora- and fauna-inspired projects favor designs that rely on off-the-shelf parts instead of requiring custom-built circuits.

“In addition to showing how we can take lessons from nature to advance a new category of bioinspired computing, my work demonstrates how we can use programmable general-purpose components to rapidly develop these novel miniaturized wireless systems,” Iyer explained. “This approach has the potential to exponentially increase innovation in domains such as smart agriculture, biological tracking, microrobots, and implantable devices. My goal is to enable anyone with a computer engineering background to advance miniaturized systems without the need to also develop custom silicon.”

Iyer, who earned his Ph.D. from the UW Department of Electrical & Computer Engineering before joining the Allen School faculty last year, previously earned a Paul Baran Young Scholar Award from the Marconi Society and his work was voted Innovation of the Year in the 2021 GeekWire Awards. He is the third student researcher advised by Gollakota to win this award in recent years, following in the footsteps of Allen School alum Rajalakshmi Nandakumar (Ph.D., ‘20), now a faculty member at Cornell University, and ECE alum Vamsi Talla (Ph.D., ‘16), who was co-advised by Allen School and ECE professor Joshua Smith and is currently CTO of UW spinout Jeeva Wireless.

Congratulations, Vikram!

Photo credit: Mark Stone/University of Washington Read more →

People power: Maya Cakmak earns Anita Borg Early Career Award for advancing innovation and broadening participation in human-centered robotics

Maya Cakmak stands smiling wearing a dark-colored short-sleeved shirt and small pendant necklace with hair pulled back, next to a silver-toned wall plaque etched with an image of Anita Borg smiling and resting her chin on overlapping hands and readable text: "Anita Borg (1949 - 2003) Anita Borg combined technical expertise and fearless vision to inspire, motivate, and move women to embrace technology" accompanied by three paragraphs of smaller text with biographical information.
Maya Cakmak beside the plaque dedicated to computer scientist and visionary Anita Borg in the Bill & Melinda Gates Center for Computer Science & Engineering on the University of Washington’s Seattle campus. Borg began her undergraduate education at the UW. Photo courtesy of Maya Cakmak

For Allen School professor Maya Cakmak, the future of robotics hinges on the human element. Since the early days of her research career, Cakmak has been leveraging advances in human-computer interaction and accessibility to shift robotics research from primarily technology-centric approaches toward a more user-centric approach. She is also known for putting people first through her support for programs and policies aimed at increasing participation in computing by women and people with disabilities. For her efforts, the Computing Research Association’s Committee on Widening Participation in Computing Research (CRA-WP) recently recognized Cakmak with its 2022 Anita Borg Early Career Award — named in honor of another woman in computing who wasn’t afraid to break new ground and lead by example.

Cakmak holds the Robert E. Dinning Career Development Professorship in the Allen School, where she directs the Human-Centered Robotics Lab. There, she and her collaborators develop robots that can be programmed by users with diverse needs and preferences to assist with everyday tasks — in her own words, “empowering users to decide what their robots will do for them.” 

Even before she joined the University of Washington faculty in 2013, Cakmak had already begun building a reputation in robotics circles for her human-centric approach. As a Ph.D. student at Georgia Tech, Cakmak showed that many data-driven methods for programming by demonstration were ill-suited to novice users, proposing instead to employ interaction-driven approaches that incorporate user studies — a novel idea at the time that has since entered the mainstream. She also was an early evangelist of active learning in robotics, which enables robots to acquire new knowledge and skills via queries instead of just passively receiving data from humans.

Maya Cakmak standing onstage behind a solid wood podium with assortment of potted foliage in front, talking into a microphone. There is a screen behind her with a PowerPoint slide titled "Training Time" with assorted images and charts, flanked by two signs with text "Robotics Science and Systems" and multiple company logos underneath.
“I deeply care about the relevance and usefulness of my research.” Cakmak presenting at the Robotics: Science and Systems conference in 2018. Anca Dragan

After her arrival at the Allen School in 2013, Cakmak sought to expand end-user programming capabilities for robots beyond conventional programming by demonstration by drawing upon techniques from HCI. She also pursued new connections between robotics and programming languages, uniting two of the Allen School’s strengths to usher in a new and growing area of interdisciplinary research. For example, Cakmak and Ph.D. student Sonya Alexandrova teamed up with faculty colleague Zachary Tatlock in the Programming Languages & Software Engineering (PLSE) group on RoboFlow, an intuitive visual programming language to enable users to program robots to perform mobile manipulation tasks in various real-world environments. She also worked with then-student Justin Huang (Ph.D., ‘18) and Allen School alum Tessa Lau (Ph.D., ‘01), who at the time served as CTO and “Chief Robot Whisperer” at Savioke, Inc., to create a rapid programming system for mobile service robots. That system, which the team dubbed CustomPrograms, featured a cloud-based graphical interface to support the rapid prototyping and development of custom applications by experts as well as inexperienced programmers. 

Cakmak and Huang followed that up with Code3, a suite of user-friendly tools for perception, manipulation and high-level programming that enables rapid, end-to-end programming of robots such as the PR2 and Fetch to perform mobile manipulation tasks. As with many of her projects, Code3 was designed to appeal to experts and novice users alike. In 2016, Cakmak received a CAREER Award from the National Science Foundation for her project titled “End-user programming of general-purpose robots” to continue this line of research.

According to her Allen School robotics colleague Dieter Fox, Cakmak’s human-centric approach has been hugely influential within the robotics community.

“Human-robot interaction has grown tremendously over the past decade, and Maya has been at the forefront of the field,” said Fox, director of the UW’s Robotics and State Estimation Lab and senior director of robotics research at NVIDIA. “Her work has also had real-world impact through her collaborations with multiple robotics companies that ship their robots with end-user tools that she developed.”

Fox knows all about the potential impact of such academic-industry collaboration. He and his colleagues at NVIDIA recently teamed up with Cakmak to develop new capabilities in human-robot handovers — an essential skill for robots to safely and reliably assist humans with everyday tasks. The team’s system, which demonstrated smooth human-to-robot handover of arbitrary objects commonly found in people’s homes, earned the Best Paper Award in HRI at the International Conference on Robotics and Automation (ICRA 2021). Cakmak also collaborated with Anat Caspi, director of the Allen School’s Taskar Center for Accessible Technology, and researchers in the Personal Robotics Lab led by professor Sidd Srinivasa to explore user preferences and community-centered design principles to inform the development of robot-assisted feeding systems for people with mobility impairments. 

Maya Cakmak stands smiling at a girl who appears to be middle-school age. The girl is smiling back while holding her right hand up towards a PR2 robot, which has its right hand raised, as a group of other girls sitting cross-legged on the floor looks on. There are desks with large computer monitors on them lining the wall behind them.
Cakmak (right) introduces participants in the Allen School’s K-12 outreach program to Rosie the robot and programming by demonstration. Photo courtesy of Maya Cakmak

Whatever problem she aims to solve, Cakmak has been eager to take her cue from the people who have the potential to benefit most from her work.

“I deeply care about the relevance and usefulness of my research,” Cakmak explained in her Early Career Spotlight talk at the 2018 Robotics: Science & Systems (RSS) conference. “To that end, I try to evaluate systems I develop with a realistic and diverse set of tasks, I put these systems in front of real potential users with diverse backgrounds and abilities, and I take every opportunity to demonstrate and deploy them in the real world.”

Cakmak’s contributions to accessibility go far beyond her projects and papers; she is also a vocal proponent of making the act of research — and the conferences and other events where research careers are made — more inclusive of people with disabilities and women. This includes making it easier for women who are new mothers to participate, inspired by her personal experience. In one specific example, Cakmak successfully lobbied for changes in one organization’s reimbursement policies to cover childcare expenses for invited speakers. 

“When I had my first child, I received a transitional grant from the UW ADVANCE Center for Institutional Change to cover expenses of taking my infant along for work travel,” Cakmak said. “Continuing to go to conferences was critical for me to stay active in my research community and help my graduate students network. But it did more than that. Colleagues would often approach me, not just to play with the baby but to ask questions about the logistics of taking your baby to a conference or how to manage starting a family while being on tenure-track. Many were amazed to hear about the ADVANCE grant and went back to ask for similar initiatives at their institutions.

“In an academic career system where young parents are disadvantaged, we are learning how to make it work from one another, while also pushing for positive institutional change,” she continued. “This is why representation matters so much. We owe many privileges we now take for granted to the hard work of those who were disadvantaged back in the day.”

Back on campus, Cakmak served as a co-principal investigator on the National Science Foundation-funded AccessEngineering initiative to incorporate topics such as universal design into engineering courses and to make labs and maker spaces accessible to students with diverse abilities. In an effort to engage more people with disabilities in computing and, in particular, robotics, Cakmak designed and taught a course for high school students over multiple summers as part of the UW DO-IT Center’s Scholars program. The course provided participants with hands-on experience in programming robots while encouraging them to think about how the field can help solve real-world problems. She developed a similar workshop for AccessComputing’s OurCS@UW program for undergraduate women with disabilities. Cakmak’s contributions inspired the DO-IT Center to recognize her with its 2021 Trailblazer Award.

Maya Cakmak in profile, smiling and holding a baby facing away from the camera, with a blurred Allen Center atrium at night in the background.
Cakmak has been a vocal proponent of policies that enable young parents to continue to attend research conferences and advance their careers. Aditya Mandalika

In addition to these activities, Cakmak developed a seminar to provide second-year students with the foundational skills to participate in research. She served as the UW faculty representative for the fourth cohort of the LEAP Alliance, a program that seeks to diversify the professoriate by supporting underrepresented students in pursuing academic careers. She also organized a rejuvenated RSS Women in Robotics Workshop, raising funds for travel grants that enabled women roboticists from around the world to participate, and later supported its expansion to include researchers from other underrepresented groups.

“Maya is a stellar researcher who is a leader in her field of research, and she has dedicated an immense amount of time and effort to broadening participation by women and students with disabilities — directly and indirectly impacting many students,” Fox said. “I cannot imagine a more deserving recipient of the Anita Borg Award.”

The award includes a $5,000 prize, which Cakmak donated to the Allen School to support more women in pursuing undergraduate research.

“This award means so much to me because it recognizes things I did in my career beyond my research that I did not expect to be recognized for,” Cakmak said. “I have been so fortunate to have many amazing role models, mentors, advocates, and supporters; I was just trying to pay it forward. I am honored especially because I have been so inspired by Anita Borg.”

Indeed, Cakmak’s work embodies Borg’s famous quote: “If we want technology to serve society rather than enslave it, we have to build systems accessible to all people — be they male or female, young, old, disabled, computer wizards or technophobes.”

Cakmak is the second Allen School faculty member to earn the Borg Early Career Award in the past five years, following professor Yejin Choi’s recognition in 2018. Allen School alumni Martha Kim (Ph.D., ’08), a faculty member at Columbia University; A.J. Bernheim Brush (Ph.D., ’02), Partner Group Program Manager at Microsoft; and Gail Murphy (Ph.D., ’96), Vice President of Research and Innovation and a faculty member at the University of British Columbia, are also past recipients of the award.

Congratulations, Maya! Read more →

Allen School researchers bring first underwater messaging app to smartphones

Two people in t-shirts and swimming trunks underwater in a tank holding smartphones in flexible waterproof cases. One of the smartphone screens is visible, displaying the AquaApp interface with text and graphics depicting various diving hand signals.

For millions of people who participate in activities such as snorkeling and scuba diving each year, hand signals are the only option for communicating safety and directional information underwater. While recreational divers may employ around 20 signals, professional divers’ vocabulary can exceed 200 signals on topics ranging from oxygen level, to the proximity of aquatic species, to the performance of cooperative tasks.

The visual nature of these hand signals limits their effectiveness at a distance and in low visibility. Two-way text messaging is a potential alternative, but one that requires expensive custom hardware that is not widely available.

Researchers at the University of Washington have shown how to achieve underwater messaging on billions of existing devices using software alone. The team developed AquaApp, the first mobile app for acoustic-based communication and networking underwater that can be used with commodity smartphones and smartwatches.

“Smartphones rely on radio signals like WiFi and Bluetooth for wireless communication. Those don’t propagate well underwater, but acoustic signals do,” said co-lead author Tuochao Chen, a Ph.D. student in the Allen School. “With AquaApp, we demonstrate underwater messaging using the speaker and microphone widely available on smartphones and watches. Other than downloading an app to their phone, the only thing people will need is a waterproof phone case rated for the depth of their dive.”

The AquaApp interface enables users to select from a list of 240 pre-set messages that correspond to hand signals employed by professional divers, with the 20 most common signals prominently displayed for easy access. Users can also filter messages according to eight categories, including directional indicators, environmental factors, and equipment status. 

In building the app, Chen and his collaborators — co-lead author and fellow Ph.D. student Justin Chan and professor Shyam Gollakota — had to overcome a variety of technical challenges that they had not previously encountered on dry land.

“The underwater scenario surfaces new problems compared to applications over the air,” explained Chan. “For example, fluctuations in signal strength are aggravated due to reflections from the surface, floor and coastline. Motion caused by nearby humans, waves and objects can interfere with data transmission. Further, microphones and speakers have different characteristics across smartphone models. We had to adapt in real time to these and other factors to ensure AquaApp would work under real-world conditions.”

Those other factors include the tendency for devices to rapidly shift position and proximity in the current and the various noise profiles the app might encounter due to the presence of vessels, animals, and even low-flying aircraft. 

The team created an algorithm that allows AquaApp to optimize, in real time, the bitrate and acoustic frequencies of each transmission based on certain parameters, including distance, noise and variations in frequency response across devices. When one user wants to send a message to another device, their app first sends a quick note, called a preamble, to the other device. AquaApp on the second device runs the algorithm to determine the best conditions to receive the preamble; it then tells the first device to use those same conditions to send the actual message.
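That handshake can be sketched in a few lines. The function and field names below are illustrative assumptions rather than AquaApp’s actual code, but the control flow follows the description above:

```python
# Illustrative sketch of AquaApp's preamble-based rate adaptation;
# names and interfaces here are assumptions, not the app's real API.

def send_adaptive_message(sender, receiver, payload):
    # 1. The sender plays a short, known preamble through its speaker.
    sender.transmit_preamble()

    # 2. The receiver measures how that preamble arrived (signal
    #    strength, noise, its own microphone's frequency response)
    #    and picks the highest bitrate and acoustic band it expects
    #    to decode reliably under current conditions.
    params = receiver.estimate_channel()  # e.g. {"bitrate": ..., "band": ...}

    # 3. The receiver acoustically replies, telling the sender which
    #    parameters to use.
    receiver.reply(params)

    # 4. The sender encodes the actual message with those parameters,
    #    so the transmission is matched to the channel as measured
    #    moments earlier.
    sender.transmit(payload, bitrate=params["bitrate"], band=params["band"])
```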

A person wearing a t-shirt and swim trunks holds a smartphone in flexible waterproof case underwater. The camera is focused on a close-up of the phone's screen showing the AquaApp interface with graphics depicting eight diving hand signals and a text exchange: "Which way?" "Turn around." "How much air?" "Out of air."

The researchers developed a networking protocol to share access to the underwater network, akin to how WiFi networks referee internet traffic, to support messaging between multiple devices. AquaApp can accommodate up to 60 unique users on its local network at one time. 

The team tested the real-world utility of the AquaApp system in half a dozen locations offering a variety of water conditions and activity levels, including under a bridge in calm water, at a popular waterfront park with strong currents, next to the fishing dock of a busy lake, and in a bay with strong waves. In a series of experiments, they evaluated AquaApp’s performance at distances of up to 113 meters and depths of up to 12 meters.

“Based on our experiments, up to 30 meters is the ideal range for sending and receiving messages underwater, and 100 meters for transmitting SoS beacons,” Chen said. “These capabilities should be sufficient for most recreational and professional scenarios.” 

The researchers also measured AquaApp’s impact on battery life by continuously running the system on two Samsung Galaxy S9 smartphones at maximum volume and with screens activated. The app drained just 32% of the devices’ battery over the course of four hours, which is within the maximum recommended dive time for recreational scuba diving.

“AquaApp brings underwater communication to the masses,” said Gollakota, who directs the Mobile Intelligence Lab and holds the Torode Family Career Development Professorship in the Allen School. “The state of underwater networking today is similar to ARPANET, the precursor of the internet, in the 1970s, where only a select few had access to the internet. AquaApp has the potential to change that status quo by democratizing underwater technology and making it as easy as downloading software on your smartphone.”

Gollakota and his co-authors presented their paper describing AquaApp last week at SIGCOMM 2022, the flagship peer-reviewed conference of the Association for Computing Machinery’s Special Interest Group on Data Communications. The team’s data and open-source Android code are available on the AquaApp website. Watch a video demonstrating AquaApp here.

The researchers are supported by the Moore Inventor Fellowship award #10617 and the National Science Foundation.

Photo credits: University of Washington. Sarah McQuate in the UW News Office contributed to this story. Read more →
