
A picture of health: Google Fellowship recipient Xin Liu combines machine learning and mobile sensing through an equity lens to support remote health assessment

Portrait of Xin Liu in dark blue button-up shirt and glasses standing outdoors with fall foliage and buildings blurred in the background.

When COVID consigned doctor-patient interactions from the clinic to a computer screen, Allen School Ph.D. candidate Xin Liu already had his finger on the pulse of that paradigm shift. Since his arrival at the University of Washington in 2018, Liu has worked with professor Shwetak Patel in the UbiComp Lab to apply mobile sensing and machine learning to real-world problems in health care, with a focus on developing non-contact, camera-based physiological screening and monitoring solutions that are accessible to all and adaptable to a wide range of settings. His goal is to “democratize camera-based non-contact health sensing for everyone around the world by making it accessible, equitable, and useful.”

Liu embarked on his undergraduate education in the US as an international student and quickly learned how difficult it can be to integrate into a new culture. At UMass Amherst, he became the first international student consultant and peer mentor, and he encouraged other international students to embrace leadership roles. “This experience motivated my research in computer science and health, where I aimed to develop useful and accessible computing technologies for diverse populations,” Liu recounted.

Liu’s research would take on new meaning and urgency as the pandemic upended modern social interactions. As remote clinical visits increased significantly during this time, the need for remote ways of sensing and monitoring heart health became critical for medical practitioners and patients alike. Liu’s work combining camera-based physiological sensing with machine learning algorithms could offer new possibilities in early detection of heart-related health issues as well as allow for much-needed remote diagnostics when a patient faces barriers to obtaining in-person care at a clinic — even outside of a pandemic. 
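For readers curious how a plain camera can pick up a pulse at all, the classic remote photoplethysmography (rPPG) baseline tracks the tiny color changes in facial skin that occur as blood volume rises and falls with each heartbeat. The Python sketch below illustrates only that textbook baseline, assuming a pre-extracted trace of mean green-channel values over a face region; the function name, frame rate and frequency band are illustrative choices, and Liu’s systems replace this kind of hand-tuned pipeline with learned models.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_trace: np.ndarray, fps: float = 30.0) -> float:
    """Estimate heart rate (beats per minute) from the mean green-channel
    intensity of a facial region over time, the basic rPPG signal.
    `green_trace` holds one value per video frame."""
    # Remove the slow baseline and keep only plausible pulse frequencies
    # (0.7-4 Hz, roughly 42-240 beats per minute).
    centered = green_trace - np.mean(green_trace)
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
    filtered = filtfilt(b, a, centered)

    # The dominant frequency of the filtered trace approximates the pulse rate.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0
```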

As Liu saw it, several major issues needed to be considered, and in some cases corrected, in order for camera-based health assessment to be widely applicable. From a practical standpoint, the tool would have to achieve a high level of accuracy for medical practitioners to remotely evaluate patients’ vital signs and make informed clinical decisions. Privacy is another critical concern when it comes to people’s personally identifiable medical information; because of this, any collected data would need to be held locally on the device.

That device could come with varying capabilities — or lack thereof. 

“Since people have access to a wide range of devices, the application has to function on even the most rudimentary smartphone,” Liu explained. “Likewise, people in resource-constrained settings may not consistently have connectivity, so the application would need to be capable of running without being connected to a network.”

Disparate access to resources is not the only consideration for Liu and his colleagues when it comes to equity.

“In the past, camera-based solutions were skewed towards lighter skin tones, and did not function well with darker skin tones,” Liu noted. “To be truly useful, particularly to populations who are already underserved in health care, our application has to function accurately across the full range of skin tones, and under a variety of conditions.”

In 2020, Liu and his fellow researchers proposed the first on-device neural network for non-contact physiological sensing. This novel multi-task temporal shift convolutional attention network (MTTS-CAN) addressed the challenges of privacy, portability and precision in contactless cardiopulmonary measurement. Their paper, which was among the top 1% of submissions to the 34th Conference on Neural Information Processing Systems (NeurIPS), was foundational in that it enabled health sensing on devices with lower processing power. Following this, Liu conceived an even faster neural architecture called EfficientPhys for lower-end mobile devices, which will appear at the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
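A rough intuition for why such models can run on low-power hardware: rather than applying costly 3D convolutions over a video volume, the temporal shift idea moves a small slice of feature channels one frame forward or backward in time, so that ordinary 2D convolutions can mix temporal information almost for free. The minimal PyTorch sketch below shows that generic operation under an assumed (frames, channels, height, width) layout; it is not the published MTTS-CAN or EfficientPhys code.

```python
import torch

def temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    """Shift a fraction of channels along the time axis.
    `x` has shape (T, C, H, W): per-frame feature maps for T frames."""
    t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # pull information from the next frame
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # pull information from the previous frame
    out[:, 2 * fold:] = x[:, 2 * fold:]              # leave the remaining channels untouched
    return out
```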

Last year, Liu proposed an unsupervised few-shot learning algorithm called MetaPhys, presented at the Conference on Health, Inference, and Learning (CHIL), and a mobile camera-based few-shot learning system called MobilePhys, which appeared in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), as steps toward addressing camera-based physiological sensing’s shortcomings with regard to variations in patients’ skin tones, activities and environmental contexts. He has also been involved with Microsoft Research’s efforts to use synthetic avatars to simulate facial blood flow changes and systematically generate a large-scale video dataset under a wide variety of conditions, such as different skin tones and backgrounds. This work has produced a synthetic dataset that offers precisely synchronized, noise-free labels to overcome issues with variability and diversity.
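To make “few-shot” concrete: the general recipe is to start from a model trained on many people and briefly fine-tune it on a handful of short calibration clips from a new user, device or lighting condition. The PyTorch sketch below shows that generic personalization loop under assumed tensor shapes and an assumed model interface; it is not the MetaPhys or MobilePhys training procedure described in those papers.

```python
import torch
import torch.nn as nn

def personalize(model: nn.Module,
                calib_clips: torch.Tensor,     # (K, T, C, H, W): a few short face-video clips
                calib_targets: torch.Tensor,   # (K, T): reference pulse waveforms for those clips
                steps: int = 10,
                lr: float = 1e-4) -> nn.Module:
    """Adapt a pretrained pulse-estimation model to a new user with only a
    few labeled calibration clips (a generic few-shot fine-tuning sketch)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        pred = model(calib_clips)        # assumed to output (K, T) waveforms
        loss = loss_fn(pred, calib_targets)
        loss.backward()
        optimizer.step()
    return model
```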

Having proven the concept, Liu has turned his attention to ensuring such tools will be reliable and efficient for diverse populations under real-world conditions. Whereas previous research on non-contact camera-based physiological monitoring has focused on healthy populations, Liu has initiated a collaboration with Dr. Eugene Yang, clinical professor and director of the Eastside Specialty Clinic at UW Medicine, to collect data in inpatient and outpatient clinical settings. Their goal is to validate the team’s machine learning-augmented, camera-based approach for obtaining accurate readings for a range of health indicators, such as heart rate and respiration rate, in real-world clinical settings. Ultimately, Liu aims to push the boundaries of non-contact physiological measurement by exploring contactless camera sensing to measure blood pressure and detect arrhythmias.

“Xin takes a collaborative and interdisciplinary approach to his research,” said Patel, Liu’s Ph.D. advisor. “He works closely with his clinical partners to inform his research and executes his research with the highest ethical and equitable standards. He has taken on some difficult research challenges around skin tone diversity for health-related AI models that are already having industry impact.” 

Liu earned a 2022 Google Fellowship in Health Research and Artificial Intelligence for Social Good and Algorithmic Fairness to advance this work and support the completion of his dissertation.

Congratulations, Xin!