Allen School undergraduates recognized by the Computing Research Association for advancing health sensing, programming languages and systems research

Computing Research Association logo

The Allen School has a proud tradition of nurturing undergraduate student researchers whose work has the potential for real-world impact. This year, three of those students — Jerry Cao, Mike He and Yu Xin — earned honorable mentions from the Computing Research Association (CRA) as part of its 2022 Outstanding Undergraduate Researcher Awards competition for their contributions in health sensing and fabrication, programming languages and machine learning, and building robust computer systems.

Jerry Cao

The CRA recognized senior Jerry Cao, who is majoring in computer science and applied mathematics, for his research in health sensing and fabrication. Advised by professors Jennifer Mankoff and Shwetak Patel, his work applies computing and fabrication to improve individuals’ quality of life. To reduce the burden of health monitoring and make it easier for users to prototype custom tools that fit their personalized needs, Cao is creating a wearable device in the form of a compression sleeve for the leg that records changes in blood volume in the body’s superficial tissue. These readings can help predict the onset of adverse symptoms throughout the day for conditions such as Postural Orthostatic Tachycardia Syndrome (POTS), in which blood flow is improperly regulated throughout the body.
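
As a purely illustrative example of the kind of signal processing such a device might enable (not Cao’s actual method), the short Python sketch below flags samples where a blood-volume signal drops well below its rolling baseline, the sort of sustained change that could precede adverse symptoms:

```python
import numpy as np

def flag_blood_volume_drops(signal, fs, window_s=30.0, drop_frac=0.15):
    """Flag sustained drops in a blood-volume signal against a rolling baseline.

    signal: 1-D array of relative blood-volume readings
    fs: sampling rate in Hz; window_s and drop_frac are illustrative defaults
    """
    win = max(1, int(window_s * fs))
    # Rolling-mean baseline over a window of recent samples.
    baseline = np.convolve(signal, np.ones(win) / win, mode="same")
    # Mark samples that fall more than drop_frac below the baseline.
    return signal < (1.0 - drop_frac) * baseline
```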

Cao is also working on a project to rapidly prototype physical objects. Fabricating a final product currently requires several design iterations; he aims to reduce that number by reconfiguring a model to support real-time iteration. He is developing a pipeline that takes a parametric model and produces a reconfigurable prototype in which each parameter can be adjusted within a specified allowable range. This lets users more easily change the size of the physical model and record all the measurements needed to fabricate a final version. For example, when building a cabinet, builders must ensure it fits in its designated space. A reconfigurable prototype limits the number of iterations and allows users to explore different configurations of the object before creating the final version from actual materials.
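
As a rough sketch of the idea (the class, parameter names and ranges below are hypothetical, not Cao’s actual pipeline), a reconfigurable prototype can be modeled as a set of parameters that users may adjust only within their allowed ranges, with the final settings recorded for fabrication:

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    name: str
    value: float
    lo: float    # minimum the reconfigurable prototype supports
    hi: float    # maximum the reconfigurable prototype supports

class ReconfigurablePrototype:
    """A parametric model whose parameters can be adjusted in real time,
    but only within the ranges the physical prototype allows."""

    def __init__(self, params):
        self.params = {p.name: p for p in params}

    def adjust(self, name, value):
        p = self.params[name]
        # Clamp the request to the supported range.
        p.value = min(max(value, p.lo), p.hi)
        return p.value

    def measurements(self):
        """Record the settings needed to fabricate the final version."""
        return {name: p.value for name, p in self.params.items()}

# Hypothetical cabinet prototype: resize it until it fits the space,
# then read off the measurements for the final build.
cabinet = ReconfigurablePrototype([
    Parameter("width_cm", 80, 60, 120),
    Parameter("height_cm", 180, 150, 220),
    Parameter("depth_cm", 40, 30, 60),
])
cabinet.adjust("width_cm", 95)
print(cabinet.measurements())
```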

Mike He

Mike He, a senior studying computer science, was recognized for his work in programming languages, formal verification, compilers and machine learning systems. Advised by professor Zachary Tatlock, He worked with the Allen School’s PLSE group on Dynamic Tensor Rematerialization (DTR), an algorithm for training deep learning models under constrained memory budgets. Deep learning models consume large amounts of GPU memory during training; rather than relying on the static planning required by classic checkpointing approaches, DTR manages memory dynamically. When memory fills up, DTR evicts the stalest, cheapest-to-recompute tensors to make room for the next allocation, and if the training loop later tries to access a previously evicted tensor, DTR recomputes it on demand by tracking operator dependencies.
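
At its core, DTR is a greedy online eviction policy. The Python sketch below illustrates that idea with a heuristic in the spirit of the one described in the DTR paper, which balances recompute cost, size and staleness; the names and structure here are illustrative, not DTR’s actual implementation:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Tensor:
    size: int                # bytes of GPU memory the tensor occupies
    compute_cost: float      # estimated time to recompute it from its parents
    last_access: float       # timestamp of the most recent use
    parents: list = field(default_factory=list)  # inputs needed to recompute
    materialized: bool = True

def eviction_score(t, now):
    # Favor evicting tensors that are cheap to recompute, large and stale;
    # a lower score marks a better eviction candidate.
    staleness = now - t.last_access
    return t.compute_cost / (t.size * staleness + 1e-9)

def free_memory(pool, bytes_needed):
    """Greedily evict resident tensors until enough memory is freed."""
    now = time.monotonic()
    candidates = sorted((t for t in pool if t.materialized),
                        key=lambda t: eviction_score(t, now))
    freed = 0
    for t in candidates:
        if freed >= bytes_needed:
            break
        t.materialized = False   # drop the tensor's storage
        freed += t.size

def access(t):
    """Rematerialize an evicted tensor on demand, recursively."""
    if not t.materialized:
        for parent in t.parents:
            access(parent)       # operands must be resident first
        # ...re-run the operator that produced t here...
        t.materialized = True    # (a full system would evict again if needed)
    t.last_access = time.monotonic()
```

Because eviction decisions are made on the fly rather than planned ahead of time, the same loop can serve models whose computation graphs change from one iteration to the next.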

In addition to his contributions to DTR, He led the development of flexible compiler techniques for matching workloads to new hardware accelerators, making those accelerators easier to target from deep learning frameworks. The goal is to let new devices be incorporated into an existing deep learning framework more easily and, in principle, to support formal functional verification down to the hardware implementation. The resulting project, 3LA, includes a pattern-matching algorithm based on equality saturation that finds accelerator-supported workloads in deep learning models. It addresses the mapping gap between deep learning models, which are written in high-level domain-specific languages, and specialized accelerators by using instruction-level abstraction as the software-hardware interface.
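
Equality saturation explores many equivalent rewrites of a program at once; the simplified sketch below shows only the basic step underneath that process, syntactically matching an accelerator-supported pattern against a model’s intermediate representation. The tiny IR, the pattern syntax and the fused linear-layer example are all hypothetical, not 3LA’s actual interfaces:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    """A tiny expression IR: an operator name plus argument expressions."""
    name: str
    args: tuple

def matches(expr, pattern, bindings):
    """Bind pattern variables (strings starting with '?') to sub-expressions."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            return bindings[pattern] == expr
        bindings[pattern] = expr
        return True
    if isinstance(expr, Op) and isinstance(pattern, Op):
        return (expr.name == pattern.name
                and len(expr.args) == len(pattern.args)
                and all(matches(e, p, bindings)
                        for e, p in zip(expr.args, pattern.args)))
    return expr == pattern

def find_offloadable(expr, pattern, hits):
    """Collect every sub-expression matching an accelerator-supported pattern."""
    bindings = {}
    if matches(expr, pattern, bindings):
        hits.append((expr, bindings))
    if isinstance(expr, Op):
        for arg in expr.args:
            find_offloadable(arg, pattern, hits)
    return hits

# Suppose the accelerator supports fused linear layers: add(matmul(X, W), B).
linear_pattern = Op("add", (Op("matmul", ("?x", "?w")), "?b"))
model = Op("relu", (Op("add", (Op("matmul", ("input", "weights")), "bias")),))
print(find_offloadable(model, linear_pattern, []))
```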

Yu Xin

Yu Xin, a senior studying computer science and applied and computational mathematical science, was honored by the CRA for his work with Allen School professor Arvind Krishnamurthy on building effective and robust computer systems. In particular, Xin helped develop a scheduler for serving deep learning inference tasks. When deployed at large scale, applications backed by cloud-based deep learning models tend to flood data center GPU clusters, driving up both response times and costs. To address both, Xin and his collaborators created Symphony, a centralized dispatcher that satisfies requests within a latency bound, balances load across GPUs and maximizes their efficiency by forming dynamically sized batches of inference requests. By loading dozens of deep learning models on each GPU, Symphony amortizes bursts across models and has the potential to eliminate the need for overprovisioning. To enable multiple dispatchers for better scalability, Xin designed an algorithm that partitions the model space into disjoint subsets, with each dispatcher handling one subset. The algorithm finds the partitioning that minimizes the deviation between partitions in total request rates and model sizes by generating and solving a Mixed Integer Linear Programming (MILP) problem.
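
As an illustration of that partitioning step, the sketch below uses the PuLP library to set up a small MILP that assigns made-up models to two dispatchers while minimizing the spread in total request rate and total model size across partitions. The statistics are invented, and a real formulation would normalize and weigh the two objectives:

```python
import pulp  # pip install pulp

# Hypothetical per-model statistics: request rate (req/s) and size (MB).
rates = [120, 80, 300, 50, 200, 90]
sizes = [500, 350, 800, 200, 650, 400]
n_models, n_parts = len(rates), 2

prob = pulp.LpProblem("balanced_partition", pulp.LpMinimize)

# x[i][p] = 1 if model i is assigned to dispatcher/partition p.
x = [[pulp.LpVariable(f"x_{i}_{p}", cat="Binary") for p in range(n_parts)]
     for i in range(n_models)]

# Bounds on the per-partition totals; shrinking the gap between the
# largest and smallest totals balances the partitions.
rate_hi = pulp.LpVariable("rate_hi")
rate_lo = pulp.LpVariable("rate_lo")
size_hi = pulp.LpVariable("size_hi")
size_lo = pulp.LpVariable("size_lo")

for i in range(n_models):
    prob += pulp.lpSum(x[i]) == 1      # each model gets exactly one partition
for p in range(n_parts):
    part_rate = pulp.lpSum(rates[i] * x[i][p] for i in range(n_models))
    part_size = pulp.lpSum(sizes[i] * x[i][p] for i in range(n_models))
    prob += part_rate <= rate_hi
    prob += part_rate >= rate_lo
    prob += part_size <= size_hi
    prob += part_size >= size_lo

# Minimize the deviation across partitions in total rate and total size
# (a real formulation would scale the two terms to comparable units).
prob += (rate_hi - rate_lo) + (size_hi - size_lo)
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for p in range(n_parts):
    chosen = [i for i in range(n_models) if x[i][p].value() == 1]
    print(f"dispatcher {p}: models {chosen}")
```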

Xin’s previous work includes developing tools for analyzing images of proteins captured with a cryo-electron microscope. One such tool filters out high-frequency noise by generating an artificial image from a mathematical model, comparing it against every patch of the micrograph and outputting all of the matches. This approach saves researchers time and increases their effectiveness by directing their attention to the most relevant sites.
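
That workflow is a form of template matching. The sketch below conveys the general idea using scikit-image’s normalized cross-correlation: a synthetic Gaussian blob stands in for the model-generated template, and peaks in the correlation map mark candidate particle sites. The template shape, threshold and data are invented for illustration:

```python
import numpy as np
from skimage.feature import match_template

# Synthetic 15x15 Gaussian blob standing in for the projected particle model
# (the real tool would derive its template from a physical model).
y, x = np.mgrid[-7:8, -7:8]
template = np.exp(-(x**2 + y**2) / (2 * 3.0**2))

# A noisy synthetic micrograph with a few planted copies of the particle.
micrograph = np.random.default_rng(0).normal(size=(256, 256))
for cy, cx in [(40, 60), (120, 200), (220, 90)]:
    micrograph[cy - 7:cy + 8, cx - 7:cx + 8] += 4 * template

# Normalized cross-correlation of the template against every image patch.
scores = match_template(micrograph, template, pad_input=True)

# Positions whose correlation exceeds a threshold are candidate particle sites.
peaks = np.argwhere(scores > 0.5)
print(peaks)
```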

Congratulations to Jerry, Mike and Yu!