
With Virtual Chinrest, Allen School researchers aim to make online behavioral research less WEIRD

Knowing the distance between the center of the display and the entry point of the blind spot area (s), and given that α is always around 13.5 degrees, the authors can calculate the viewing distance (d) as part of the Virtual Chinrest.

Behavioral studies in labs on university campuses are dominated by participants who are WEIRD: western, educated, and from industrialized, rich and democratic countries. They are usually college students participating in the studies for class credit. 

In an effort to expand these studies beyond WEIRD populations, virtual labs like LabintheWild and Amazon’s Mechanical Turk open up studies to anyone with access to the internet. These labs give researchers a broader glimpse at the way people think and behave, from young to old, around the globe, across diverse cultural beliefs and geographical locations. But researchers still hesitate to rely too heavily on these virtual labs because they need a more controlled environment.

Allen School researchers have come up with a tool to allow for more control. The Virtual Chinrest “enables remote, web-based psychophysical research at large scale, by accurately measuring a person’s viewing distance through a 30-second task,” according to the lead author and Allen School Ph.D. student, Qisheng Li.

She said that conducting psychophysical experiments online allows researchers to analyze human perception and performance at scale. Study participants in labs often need to rest their chins on a fixed support to control the experiment, ensuring each participant views the test from the same position. 

“We don’t know how far people in any online environment sit from their computers and we don’t know how big their display of the test is,” said Allen School professor Katharina Reinecke, co-founder of LabintheWild. “The virtual chinrest can monitor both the resolution on the screen and the physical distance from the monitor so researchers have more control over the online studies.”

To first calibrate a participant’s display, participants are asked to hold a credit card-sized card against the screen and adjust an on-screen slider until it matches the card’s width. Because credit cards have a standard size, that allows the researchers to calculate the pixel density of the monitor.

To measure the user’s distance from his or her monitor, there is also a blind spot task. Participants are asked to focus on a black square on the screen with their right eye closed while a red dot repeatedly sweeps from right to left. They must hit the spacebar on their keyboards whenever the red dot appears to have vanished. That allows researchers to determine the distance between the center of the black square and the center of the red dot when it disappears from view, and in turn how far the participant sits from the monitor.
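Putting the two measurements together, the geometry reduces to a few lines of arithmetic. The sketch below is an illustration of the calculation as described above, not the authors’ actual code; the function names are invented here, and 85.6 mm is the standard ID-1 (credit card) width:

```python
import math

CARD_WIDTH_MM = 85.6          # standard ID-1 (credit card) width
BLIND_SPOT_ANGLE_DEG = 13.5   # alpha, per the article

def pixels_per_mm(card_width_px):
    """Pixel density, from the slider width the participant matched to the card."""
    return card_width_px / CARD_WIDTH_MM

def viewing_distance_mm(square_to_dot_px, card_width_px):
    """Viewing distance d = s / tan(alpha), where s is the on-screen
    distance from the fixation square to where the red dot vanished."""
    s_mm = square_to_dot_px / pixels_per_mm(card_width_px)
    return s_mm / math.tan(math.radians(BLIND_SPOT_ANGLE_DEG))

# Example: the slider spans 400 px and the dot vanished 320 px from the square
distance = viewing_distance_mm(square_to_dot_px=320, card_width_px=400)
print(round(distance))  # roughly 285 mm, i.e. the viewer sits about 28.5 cm away
```

Note that the pixel-density step is what makes the blind-spot step meaningful: the same pixel distance corresponds to different physical distances on different displays.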

In an online test of the Virtual Chinrest on LabintheWild that included 1,153 participants, Reinecke’s team was able to replicate and extend the results of previous in-lab studies, demonstrating that the Virtual Chinrest makes it possible to conduct psychophysical studies online with more diverse participant samples. 

Reinecke and her collaborators presented the Virtual Chinrest in a recent paper published in Scientific Reports, a Nature Research journal. Additional authors include professor Sung Jun Joo of Pusan National University and professor Jason D. Yeatman of Stanford University. 

January 28, 2020

Noelle Merclich works to make the Allen School experience a great one for incoming undergrads

From computer science to linguistics and kickboxing to baking, this month’s undergraduate student spotlight, Noelle Merclich, is driven to create a welcoming environment in the Allen School, serve others, and always be kind and compassionate. During a month when people are making, and struggling to stick to, new resolutions, the Maple Valley, Wash., native and junior computer science major resolved long ago to do her best and try new things every day. 

Allen School: Why did you choose to major in computer science?

Noelle Merclich: After taking a couple years of programming in high school, I realized I really enjoyed the puzzle solving nature of it and how computer science impacts nearly every other possible career. Also my dad constantly jokes about how his return on investment for my college education is my ability to pay for him to take trips around the world, so the financial stability doesn’t hurt either. 

Allen School: What do you like most about being an Allen School student?

NM: Definitely the people. Some of my favorite memories of the last few years include playing card games in the labs until 2 a.m. after barely finishing an assignment before the deadline and baking a surprise birthday cake in the residence halls. My experience at UW wouldn’t be the same without the friends I’ve met in the Allen School, and I know I definitely wouldn’t have survived most finals weeks without their help.

Allen School: What do you like about being a TA for the CSE Startup course and Direct Admission seminars? 

NM: I’ve had the opportunity to be a TA for CSE Startup for three years now. It’s been exciting to see how the course has changed and grown over time. I enjoy working with the instructor, Lauren Bricker, and undergrad adviser Leslie Ikeda to improve the curriculum to best fit the needs of our students during their first experience in college and at the Allen School. I appreciate how I’m able to help the curriculum evolve along with my own experiences at UW. As for the DA seminar, I like how I’m able to help a larger number of students with their transition to the Allen School. Realistically, my experience in computer science has always been a positive one since my parents have always supported my aspirations. And it’s never seemed abnormal to be a woman in the industry since my first two computer science teachers were women. However, I realize this is definitely not how everyone is initially exposed to the field, so I try to use my role in the DA seminar to help make each incoming student feel as though they have a place in computer science in whatever way I can.

Allen School: Why did you choose to minor in linguistics? 

NM: My interest in linguistics began after giving a presentation on Noam Chomsky for a psychology class. When I took Ling 200: Intro to Linguistic Thought, I was really fascinated by the universality of how we break down language, so I took a few more classes in the linguistics department. I committed to completing my minor because over the past two years I’ve developed an interest in natural language processing, and understanding concepts like how syntax and semantics work together to form meaning is helpful with that.

Allen School: What are some of your favorite experiences or activities at the UW?

NM: I started taking kickboxing classes my freshman year because I’ve always thought it seemed really fun. Now it’s become a way for me to punch my stress away. Also, before I decided on pursuing computer science I thought I would go to culinary school to become a pastry chef. It’s safe to say that I don’t just like desserts, I love them. I’m constantly trying out different bakeries around campus and Seattle to find the best macarons, tres leches, or anything sweet. I highly recommend Cubes and Le Panier.

Allen School: Who or what inspires you?

NM: My grandmother, Rosa. In spite of all the drama and tragedy she faced throughout her life, she maintained such kindness and compassion for everyone. When I was about 14, I remember helping her make homemade spaghetti and meatballs to give to the construction workers doing renovations on her neighbor’s house at the end of the block. Although she passed away at the end of my freshman year, she has always motivated me to be more kind, patient, and helpful to those around me. Her example is part of why I’m so passionate about computer science outreach. I try to find ways to connect students to computer science who normally wouldn’t have the resources to get started themselves. As a result, when one of the Allen School advisers sent out an application for HCDE’s alternative spring break group, I jumped at the chance. For two years I’ve had the incredible opportunity of being part of a team that created curricula for teaching introductory programming concepts to middle and high school students in rural Neah Bay, WA. It was definitely one of the more challenging and fulfilling college experiences I’ve had.

We are inspired by Noelle’s contributions to the Allen School and her outreach work! 

January 21, 2020

Seeing the forest for the trees: UW team advances explainable AI for popular machine learning models used to predict human disease and mortality risks

Tree-based machine learning models are among the most popular non-linear predictive learning models in use today, with applications in a variety of domains such as medicine, finance, advertising, supply chain management, and more. These models are often described as a “black box” — while their predictions are based on user inputs, how the models arrived at their predictions using those inputs is shrouded in mystery. This is problematic for some use cases, such as medicine, where the patterns and individual variability a model might uncover among various factors can be as important as the prediction itself.

Now, thanks to researchers in the Allen School’s Laboratory of Artificial Intelligence for Medicine and Science (AIMS Lab) and UW Medicine, the path from inputs to predicted outcome has become a lot less dense. In a paper published today in the journal Nature Machine Intelligence, the team presents TreeExplainer, a novel set of tools rooted in game theory that enables exact computation of optimal local explanations for tree-based models. 

While there are multiple ways of computing global measures of feature importance that gauge each feature’s impact on the model as a whole, TreeExplainer is the first tractable method capable of quantifying an input feature’s local importance to an individual prediction while simultaneously measuring the effect of interactions among multiple features using exact fair allocation rules from game theory. By precisely computing these local explanations across an entire dataset, the tool also yields a deeper understanding of the global behavior of the model. Unlike previous methods for calculating local effects that are impractical or inconsistent when applied to tree-based models and large datasets, TreeExplainer produces rapid local explanations with a high degree of interpretability and strong consistency guarantees.

“For many applications that rely on machine learning predictions to guide decision-making, it is important that models are both accurate and interpretable — meaning we can understand how a model combined and weighted the various input features in predicting a certain result,” explained lead author and recent Allen School alumnus Scott Lundberg (Ph.D., ‘19), now a senior researcher at Microsoft Research. “Precise local explanations of this process can uncover patterns that we otherwise might not see. In medicine, factors such as a person’s age, sex, blood pressure, and body mass index can predict their risk of developing certain conditions or complications. By offering a more robust picture of how these factors contribute, our approach can yield more actionable insights, and hopefully, more positive patient outcomes.”

Many predictive AI models are a “black box” that offer predictions without explaining how they arrived at their results. TreeExplainer produces local explanations by assigning a numeric measure of credit to each input feature, such as factors that contribute to mortality risk shown in the example above. The ability to compute local explanations across all samples in a dataset can yield a greater understanding of global model structure.

Lundberg and his colleagues offer a new approach to attributing local importance to input features in trees that is both principled and computationally efficient. Their method draws upon game theory to calculate feature importance as classic Shapley values, reducing the complexity of the calculation from exponential to polynomial time to produce explanations that are guaranteed to always be both locally accurate and consistent. To capture interaction effects, the team introduces Shapley Additive Explanation (SHAP) interaction values. These offer a new, richer type of local explanation that employs the Shapley interaction index — a relatively recent concept in game theory — to produce a matrix of feature attributions with uniqueness guarantees similar to Shapley values.

This dual approach enables separate consideration of the main contributions and the interaction effects of features that lead to an individual model prediction, which can uncover patterns in the data that may not be immediately apparent. By combining local explanations from across an entire dataset, TreeExplainer offers a more complete global representation of feature performance that both improves the detection of feature dependencies and succinctly shows the magnitude, prevalence, and direction of each feature’s effect — all while avoiding the inconsistency problems inherent in previous methods.
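To see what a Shapley value assigns, consider a brute-force version for a tiny model. This toy sketch (an illustration added here, exponential in the number of features) computes the same fair-allocation quantity that TreeExplainer obtains in polynomial time for trees:

```python
from itertools import combinations
from math import factorial

def shapley_values(v, n):
    """Exact Shapley values for a set function v over n players (features).

    Brute force over all feature subsets; the exponential cost is exactly
    why a polynomial-time algorithm for trees matters.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
    return phi

# Toy model: prediction = 2*x0 + 3*x1 with x = (1, 1) and a baseline of 0,
# so v(S) is the prediction when only the features in S are "present".
def v(S):
    return (2.0 if 0 in S else 0.0) + (3.0 if 1 in S else 0.0)

print(shapley_values(v, 2))  # additive model: credits are exactly [2.0, 3.0]
```

On this additive toy model each feature’s credit equals its own contribution, and the credits sum to the difference between the full prediction and the baseline, which is the local accuracy property the paper guarantees.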

In a clinical setting, TreeExplainer can provide a global view of the dependency of certain patient risk factors while also highlighting variabilities in individual risk. In their paper, the UW researchers describe several new methods they developed that make use of the local explanations from TreeExplainer to capture global patterns and glean rich insights into a model’s behavior, using multiple medical datasets. For example, the team applied a technique called local model summarization to uncover a set of rare but high-magnitude risk factors for mortality. These are inputs such as high blood protein that are shown to have low global importance, and yet they are extremely important for some individuals’ mortality risk. Another experiment in which the researchers analyzed local interactions for chronic kidney disease revealed a noteworthy connection between high white blood cell counts and high blood urea nitrogen; the team found that the model assigned higher risk to the former when it was accompanied by the latter.

In addition to discerning these patterns, the researchers were able to identify population sub-groups that shared mortality-related risk factors and complementary diagnostic indicators for kidney disease using a technique called local explanation embeddings. In this approach, each sample is embedded into a new “explanation space” to enable supervised clustering in which samples are grouped together based on their explanations. For the mortality dataset, the experiment revealed certain sub-groups within the broader age groups that share specific risk factors, such as younger individuals with inflammation markers or older individuals who are underweight, that would not be apparent using a simple unsupervised clustering method. Unsupervised clustering also would not have revealed how two of the strongest predictors of end-stage renal disease — high blood creatinine levels, and a high ratio of urine protein to urine creatinine — can each be used to identify a set of unique at-risk individuals and should be measured in parallel. 

AIMS Lab researchers, top row from left: Su-In Lee, Scott Lundberg, and Gabriel Erion; bottom row, from left: Hugh Chen and Alex DeGrave

Beyond revealing new patterns of patient risk, the team’s approach also proved useful for exercising quality control over the models themselves. To demonstrate, the researchers monitored a simulated deployment of a hospital procedure duration model. Using TreeExplainer, they were able to identify intentionally introduced errors as well as previously undiscovered problems with input features that degraded the model’s performance over time.

“With TreeExplainer, we aim to break out of the so-called black box and understand how machine learning models arrive at their predictions. This is particularly important in settings such as medicine, where these models can have a profound impact upon people’s lives,” observed Allen School professor Su-In Lee, senior author and director of the AIMS Lab. “We’ve shown how TreeExplainer can enhance our understanding of risk factors for adverse health events.

“Given the popularity of tree-based machine learning models beyond medicine, our work will advance explainable artificial intelligence for a wide range of applications,” she said.

Lee and Lundberg co-authored the paper with joint UW Ph.D./M.D. students Gabriel Erion and Alex DeGrave; Allen School Ph.D. student Hugh Chen; Dr. Jordan Prutkin of the UW Medicine Division of Cardiology; Bala Nair of the UW Medicine Department of Anesthesiology & Pain Medicine; and Ronit Katz, Dr. Jonathan Himmelfarb, and Dr. Nisha Bansal of the Kidney Research Institute.

Learn more about TreeExplainer in the Nature Machine Intelligence paper here and the project webpage here.

January 17, 2020

Allen School’s Aditya Kusupati earns Best Paper Runner-Up at BuildSys 2019 for new low-power, deep learning algorithm for radar classification

A team of researchers that includes Allen School Ph.D. student Aditya Kusupati has developed a new low-power real-time solution for mote-scale (tiny sensor with a weak microprocessor) radar-based intruder detection. Their work has enabled the first end-to-end deep learning solution for radar classification and won the “Best Paper Runner-Up Award” at the Association for Computing Machinery’s 6th International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys 2019) last month.

With the rapid growth of Internet of Things sensors, there is an increased need for sensors that are both sophisticated and efficient. In their paper, Kusupati and the team use a low-cost Arm Cortex-M3 processor, which has only 96 KB of RAM, for a more cost- and energy-efficient solution.

“Imagine a situation inside a wildlife reserve, secluded from the modern world, with intermittent network connectivity and vast expanses to monitor,” said Kusupati. “You would ideally want to catch poachers, or intruders in a general setting, as soon as possible using minimal energy as the battery on your radar is limited since you don’t have electricity in the reserve. The technique we developed is the first deep learning based real-time solution that’s at least three times faster and more accurate than the existing state-of-the-art methods.”

The techniques proposed in the paper were also used to build a demonstration that was presented at the conference. Because the proposed solution is hierarchical, it is computationally very efficient, and the models it generates are tiny enough to fit and run within 96 kilobytes of RAM.

“The applications are varied and in the case of a more general use, it could be used as an inexpensive and non-intrusive intruder detection system for smart cities and even enable smart lights and more,” said Kusupati.  

Kusupati worked on this research while at Microsoft Research India before joining the Allen School. Collaborators include Ohio State University Ph.D. students Dhrubojyoti Roy and Sangeeta Srivastava and professor Anish Arora; Pranshu Jain, a Ph.D. student at IIT Delhi; and Manik Varma, senior principal researcher at Microsoft Research India.

To learn more, read the research paper here.

December 23, 2019

Professional gaming inspired Allen School undergrad Kevin Ryoo to study computer science

In this month’s Undergrad Spotlight, we check in with a student who may need no introduction to gaming fans. Kevin Ryoo — a second year Allen School student who transferred from Highline College — built a career as a world champion gamer before deciding to study computer science. In fact, according to Ryoo, playing games all day every day inspired his academic pursuits to learn how to design and build software. As his education advanced, his interest in computer vision, machine learning and artificial intelligence has grown. Ryoo has an upcoming internship with Shopify after tweeting about needing an internship; Shopify CEO Tobias Lütke saw it and, knowing about Ryoo’s professional gaming career, offered him an internship on the spot.  

Allen School: When did you start gaming, and how did you become so successful at it? 

Kevin Ryoo: I started playing games professionally when I was 14 years old. I played a game called Warcraft 3. Even back then (in 2002), it had a match-making system where you get matched with a player at the same skill level. I kept winning and my ranking improved. Eventually I was ranked in the top 16 and my name was on the first page of players. Because of that, pro-gaming teams reached out to me with offers. That’s how I first got into it.

I think I was able to become successful in gaming because I had the motivation and dedication to do my best. I really liked the feeling of winning a game — I hated losing, it was stressful to me. So I practiced a lot and whenever I lost, I watched the replay of the game to learn from my mistakes. I repeated it again and again and as a result, I was able to become a champion in multiple competitions. I worked hard to become a gaming nerd. 

Allen School: How did you build a career in professional gaming, and how did it change your life?

KR: When I was 16 years old, I won a tournament called the World Cyber Games (WCG). It took place once a year and follows a structure similar to the Olympics, in which regional champions compete to represent their countries. In turn, those winners compete for international glory. By winning the tournament two years in a row, in 2005 and 2006, I became a world-renowned Esports gamer and was even inducted into the WCG Hall of Fame. After that, I quit gaming to finish high school. While waiting for my green card to go to college [Ryoo moved to the USA from Korea at the age of 16], I started to play Starcraft 2. As my ranking went up, I decided to do pro-gaming again. In Starcraft 2, I won a Blizzcon US Championship, which is my second biggest achievement, and I also got 2nd and 3rd place in Major League Gaming.

Before a competition

I really loved my pro-gaming life and learned a lot from it. I traveled the world, made new friends from interesting places and learned to appreciate humanity’s rich diversity. In addition to meeting gamers and fans on the road, I became a more effective communicator by becoming an online streamer. At any given moment, I would have roughly 7,000 people watching live on my Twitch streaming channel. I really enjoyed these sessions because, in addition to showcasing my craft, I could communicate with people from other countries, share opinions, and develop an appreciation for the things that make different groups unique and wonderful. Through this, I naturally became a much more personable, social, optimistic, and open-minded person. Unfortunately, I have no time to spend on gaming right now. But I am fine with it because I am motivated to study computer science, not gaming, at this moment.

Allen School: What do you find most enjoyable about being an Allen School student? 

KR: The Allen School has all the resources that students need. The professors are great, the TAs have a lot of office hours, there are tons of tech-talks from top companies, a great career fair that also includes the top companies, amazing advisers, a new Bill & Melinda Gates Center to study in, and the labs have impressive equipment. I always feel so fulfilled by these resources and I never feel alone.

Allen School: What activities and interests do you have outside of your studies?

KR: I am a member of the UW Association for Computing Machinery (ACM) and have been following and participating in the events. I will also be on the panel for the next Husky Gaming Expo.  

Check out Kevin’s Geek of the Week profile here and read more about him in The Daily. We are proud to have Kevin as a member of the Allen School community — FTW! 

December 20, 2019

Carpentry Compiler program applies lessons from computer programming to modernize fabrication processes

UW researchers have created a tool that allows users to design woodworking projects and create optimized fabrication instructions based on the materials and equipment a user has available. Liang He/University of Washington

Researchers at the University of Washington have created a digital tool to optimize the design and fabrication of woodworking projects by drawing inspiration from computing’s move to decouple hardware and software development. The resulting program, Carpentry Compiler, allows users to create a design, then find the best step-by-step, tool-specific instructions to bring the design to fruition. The compiler can generate and combine multiple fabrication processes to create a single design.

“To make a good design, you need to think about how it will be made,” senior author Adriana Schulz, a professor in the Allen School’s Graphics and Imaging Laboratory (GRAIL), explained in a UW News release. “Then we have this very difficult problem of optimizing the fabrication instructions while we are also optimizing the design. But if you think of both design and fabrication as programs, you can use methods from programming languages to solve problems in carpentry, which is really cool.”

Schulz and the team, which includes professor Zachary Tatlock and Ph.D. student Chandrakana Nandi of the Allen School’s Programming Languages & Software Engineering (PLSE) group and Jeffrey Lipton, an adjunct professor at the Allen School, observed that a design is a sequence of geometric construction operations, while fabrication is a sequence of physical instructions. They found that by drawing ideas from modern computer systems, they could optimize fabrication processes to improve accuracy while reducing fabrication time and material costs. To do so, they created Hardware Extensible Languages for Manufacturing, or HELM. The system combines a high-level language for designing the project and a low-level language for the process of building it. The researchers also designed a compiler to map the designs to all of the available fabrication plans. They made the architecture extensible so that new fabrication hardware can be added. 

Users of HELM can enter materials, parts and tools into the program, which then derives the optimal fabrication process based on what the carpenter has at the ready. 

The compiler explores all of the possible combinations of instructions and uses program optimization to find the best directions for the project. One program might have the best process for making the roof of a bird house, another the best way to build the rest of the house. The compiler combines them to find the best overall directions for the project while enabling the user to make design tradeoffs to suit their needs and preferences.
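That combination step can be pictured as a search over candidate per-part instruction sequences scored by cost. The sketch below is a hypothetical simplification (the part names, costs, and scoring here are invented; HELM’s actual compiler optimizes over shared materials and richer cost models):

```python
from itertools import product

# Hypothetical candidate fabrication plans per part:
# (plan name, time in minutes, material cost in dollars)
plans = {
    "roof":  [("bandsaw cuts", 12, 4.0), ("chopsaw + jig", 8, 5.5)],
    "walls": [("tracksaw panels", 20, 9.0), ("CNC router", 9, 14.0)],
}

def best_combination(plans, time_weight=1.0, cost_weight=1.0):
    """Pick one plan per part, minimizing a weighted time + cost score."""
    best = None
    for combo in product(*plans.values()):
        score = sum(time_weight * t + cost_weight * c for _, t, c in combo)
        if best is None or score < best[0]:
            best = (score, dict(zip(plans, combo)))
    return best

score, chosen = best_combination(plans)
```

Adjusting `time_weight` and `cost_weight` is a stand-in for the kind of tradeoff the user can make, such as favoring a faster build over cheaper materials.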

“The future of manufacturing is about being able to create diverse, customizable high-performing parts,” Schulz said. “Previous revolutions have been about productivity mostly. But now it’s about what we can make. And who can make it.”

Additional co-authors include visiting doctoral student Chenming Wu from Tsinghua University, and Allen School postdoctoral researcher in GRAIL Haisen Zhao. The team presented this research last month at SIGGRAPH Asia in Brisbane, Australia.

To learn more, read the full publication, “Carpentry Compiler,” watch the group’s video, and check out the related UW News release and coverage by TechCrunch and Popular Woodworking.

December 19, 2019

Allen School faculty and alumni honored by ACM and IEEE for advancing the field of computing through research and service

Two of the premier professional societies in the field of computing, the Association for Computing Machinery (ACM) and the IEEE, recently announced their latest class of Fellows representing the highest status accorded to their respective memberships. Three Allen School faculty members reached that pinnacle this year: Magdalena Balazinska and Paul Beame, who were named Fellows of the ACM, and Joshua R. Smith, who was elevated to Fellow of the IEEE. In addition to this year’s faculty honorees, former Allen School postdoc Aaron Hertzmann of Adobe Research and undergraduate alumnus Brad Calder (B.S., ‘91) of Google were named Fellows of the IEEE and ACM, respectively.

With the latest announcements, a total of 24 current or former Allen School faculty members have been elevated to ACM Fellow, and 16 have achieved IEEE Fellow status.

Magdalena Balazinska, Fellow of the ACM

Portrait of Magda Balazinska

Professor and incoming Allen School director Magdalena Balazinska was named a Fellow of the ACM for her contributions to scalable distributed data systems. Balazinska, who is a member of the UW Database Group and former director of the eScience Institute, focuses on advancing new and improved data management tools and techniques for data science, big data systems, cloud computing, and the rapidly growing field of imaging and video analytics. 

“The ACM Fellow recognition is an immense honor, and I would like to thank the ACM and everyone who contributed to this nomination for their support,” said Balazinska. “I would also like to thank all my students and collaborators over the years. While I’m the one recognized today, the research accomplishments were a team effort.”

Balazinska, who joined the University of Washington faculty in 2006, was among a wave of researchers driving technical innovation in the early days of stream processing systems. She was lead author of a seminal paper in 2005 that introduced a new method for increasing the fault tolerance of distributed data-intensive stream processing applications. At the time, these applications were becoming increasingly prevalent in a variety of domains, including computer networking, financial services, medical information systems, and the military. The paper, which drew upon Balazinska’s Ph.D. research at MIT as part of the Borealis project, employed a replication-based approach that increased the resiliency of applications while offering a configurable trade-off between availability and consistency. Balazinska and her collaborators were recognized for the enduring impact of their work in 2017 with a Test of Time Award from the ACM’s Special Interest Group on the Management of Data (ACM SIGMOD). Previously, Balazinska earned a Test of Time Award from the Working Conference on Reverse Engineering (WCRE) for her 2000 paper on advanced clone analysis for automatically refactoring software code.

An internationally recognized leader in data management research and innovation, Balazinska earned the inaugural VLDB Women in Database Research Award from the conference on Very Large Data Bases in 2016 for her inspirational record of contributions to scalable distributed data systems. That record includes the Nuage project, which enabled scientists in a variety of domains to store, share, and analyze massive quantities of data in the cloud using Hadoop. Nuage was a product of her work with AstroDB, a group she co-founded with her fellow computer scientists and UW astronomers to develop new capabilities for analyzing the massive quantities of data generated by high-powered telescopes and simulations. Balazinska also co-led the Myria project to accelerate big data as a service that incorporated her work on new techniques for efficient iterative processing, multi-way join processing, and federated analytics. She and the team designed the Myria system to provide a fast, flexible, cloud-based service for scaling and optimizing data manipulation tasks — including otherwise time-consuming tasks such as cleaning, filtering, grouping, and evaluation — in preparation for statistical analysis or deployment in machine learning. Along the way, the Myria project introduced an array of innovative approaches to the provision of cloud-based database services, such as basing them on performance levels rather than resources, offering shared optimizations, enabling elastic memory-management, and more.

Balazinska’s contributions to the field of data management extend beyond research to curriculum development and academic leadership. Under a major IGERT grant from the National Science Foundation, she led the creation of the UW’s Data Science and Advanced Data Science Ph.D. options — currently offered to students in more than a dozen departments or schools — and co-led the effort to create an Undergraduate Data Science option for students to supplement their major program of study by acquiring skills in this rapidly burgeoning field. Before her appointment to lead the Allen School, Balazinska served as the university’s first Associate Vice Provost for Data Science and spent two years at the helm of the interdisciplinary eScience Institute, which promotes programs and best practices that support data-intensive discovery across campus.

“The overarching goal of my work is to empower people to work with data so that they do not need to be data management specialists to gain new knowledge and accelerate the pace of innovation,” Balazinska explained. “I also think we need to give students the opportunity to become well-versed in these techniques, which are becoming increasingly important to many different fields other than computer science.”

Paul Beame, Fellow of the ACM

Portrait of Paul Beame

Paul Beame, a professor in the Allen School’s Theory of Computation group, was named a Fellow of the ACM for his contributions in computational and proof complexity and their applications as well as for outstanding service to the computing community. Beame, who has been a member of the UW faculty since 1987 and currently serves as an associate director of the Allen School, is widely known for his work that spans both pure and applied complexity. The latter is unusual, since the subject of computational complexity is generally not regarded as an applied discipline.

Beame’s work in computational complexity provides lower bounds, as well as algorithms that yield optimal bounds, for a range of core computational problems. Notable contributions include the first optimal depth circuits for integer division, which matched the depth for the other basic arithmetic operations, and optimal time-space tradeoff lower bounds for integer sorting. He and his co-authors proved the strongest time-space tradeoff lower bounds known for any Boolean function in P — a result that has not been improved upon in nearly 20 years — and devised an optimal data structure for the predecessor problem, a substantial improvement over binary search.

Beame’s fundamental work in proof complexity, which examines the sizes of proofs and ways of expressing them, played a significant role in the growth of the field. Proof complexity seeks to answer the question of whether there is a method that always permits the writing of small proofs; if there is, then NP = coNP. Among Beame’s most significant achievements in this domain was the establishment of the first exponential lower bounds on the size of constant-depth proofs; such bounds were previously known only for resolution, a depth-one proof system. He also introduced an algebraic proof system based on multivariate polynomial equations for Hilbert’s Nullstellensatz and described the first non-trivial lower bounds for algebraic proofs of propositional logic.
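
To make the reference to resolution concrete, here is a toy refutation checker for that depth-one proof system (an illustrative sketch only, unrelated to Beame’s actual results; all function names are invented). Clauses are frozensets of integer literals, with -x denoting the negation of variable x; a set of clauses is unsatisfiable exactly when saturating under the resolution rule derives the empty clause.

```python
def resolve(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal:
    from (A or x) and (B or not-x), derive (A or B)."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append((c1 - {lit}) | (c2 - {-lit}))
    return out

def refutes(clauses):
    """Saturate under resolution; True iff the empty clause is
    derivable, i.e. the clause set is unsatisfiable."""
    known = set(clauses)
    while True:
        new = set()
        for a in known:
            for b in known:
                for r in resolve(a, b):
                    if r == frozenset():
                        return True  # empty clause: contradiction
                    if r not in known:
                        new.add(r)
        if not new:
            return False  # saturated without contradiction
        known |= new

# (x) and (not-x or y) and (not-y) is unsatisfiable:
assert refutes({frozenset({1}), frozenset({-1, 2}), frozenset({-2})})
```

Every derived clause here is again a flat disjunction of literals, which is what makes resolution “depth one”; constant-depth systems allow nested, bounded-depth formulas in proof lines.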

Another area in which Beame has made significant contributions is applied computational complexity. This work has involved taking insights from computational complexity and proof complexity, building on them, and applying them to research problems in a variety of other domains, including formal methods for software engineering, computer-aided verification, database theory, satisfiability (SAT) solving, artificial intelligence, and machine learning. For example, Beame and his collaborators were among the first to apply model checking to issues related to software by showing how symbolic model checking could be effectively applied to software specifications in addition to the hardware models for which it previously had been used. His paper analyzing SAT solvers using proof complexity, which presented the first precise characterization of clause learning as a proof system, is among his most cited work. Beame has also contributed influential research on the complexity of probabilistic inference, including the Cachet model counter and lower bounds for knowledge representation for probabilistic inference. His research on optimal lower bounds and algorithms for database query evaluation in the massively parallel computation (MPC) model not only introduced the lower bound model but also coined the now widely used term “MPC” for such computations.

“Throughout my career, I have been drawn to understanding the complexity of solving concrete computational problems,” Beame said. “In the collaborative atmosphere of the Allen School, I have been able to learn about new concrete problems and find areas where I can apply computational complexity ideas just by walking down the hall and talking with colleagues. This has led me to a broader view of computational complexity as both a pure and applied field. The success of those collaborations has been very gratifying, as is this recognition by ACM.”

In addition to his research accomplishments, Beame has devoted decades of service to the computing community. He spent more than 15 years in volunteer leadership — first as vice chair, and then chair — of the IEEE Computer Society’s Technical Committee on Mathematical Foundations of Computing (TCMF), sponsor of the annual Symposium on Foundations of Computer Science (FOCS). He went on to chair the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT), sponsor of the annual Symposium on the Theory of Computing (STOC). In that role, he was instrumental in the creation of the STOC Theoryfest. He simultaneously served for four years as a member-at-large of the ACM Council, where he successfully advocated for providing open access to ACM conference proceedings. Beame also has earned the appreciation of colleagues for taking the lead in creating and maintaining a comprehensive online history of both the STOC and FOCS conferences as a reference for the theoretical computer science community.

“The theoretical computer science community has provided a great environment for research and involves an extraordinarily talented group of people,” Beame observed. “I have just wanted to do my part in sustaining that community and keeping it vibrant; doing that has naturally led me to work on broader issues, like open access, that are of interest to all researchers.”

Joshua R. Smith, Fellow of the IEEE

Professor Joshua R. Smith, who holds a joint appointment in the Allen School and Department of Electrical & Computer Engineering, was named a Fellow of the IEEE in recognition of his contributions to far‐ and near‐field wireless power, backscatter communication, and electric field sensing.

“I am so grateful for this award, which recognizes the impact of work with many wonderful students and collaborators over the years,” said Smith, who is the Milton and Delia Zeutschel Professor in Entrepreneurial Excellence at UW, where he leads the Sensor Systems Laboratory. “I thank my family for their support and enthusiasm over so many years.”

Smith’s work on sensing and wireless power has had far-reaching impact in a variety of industries. His early research while a Ph.D. student at MIT focused on electric field sensing, now known as mutual capacitance sensing, which enables non-contact sensing of the three-dimensional position, orientation and mass distribution of a person’s body. This work formed the basis for a system adopted by automobile manufacturers that enables intelligent airbag deployment decisions based on a passenger’s size and body configuration. Mutual capacitance went on to be widely adopted in touchscreens starting with Apple’s iPhone, and subsequently, most other smartphones. Later, Smith built mutual capacitance sensors into robot fingers to create electric field pretouch, which improves a robot’s manipulation capabilities by allowing its finger to detect an object before contact. 

Before joining the UW faculty in 2011, Smith spent five years at Intel Research Seattle, where he focused on creating new capabilities in wireless power, wireless sensing, and robotics. One of the projects initiated during his time at Intel was the Wireless Identification and Sensing Platform (WISP). A collaboration between Intel and UW, WISP offered the first fully programmable platform for wireless, battery-free sensing and computation powered by radio waves. The team went on to earn a Best Paper Award at the 2009 IEEE International Conference on RFID for integrating capacitive touch sensing into passive RFID tags using WISP technology. Smith also led the development of the wireless resonant energy link (WREL), which uses magnetically coupled resonators to efficiently transfer wireless power even as range, orientation and load vary. Smith’s first Ph.D. student, Alanson Sample, now a faculty member in Electrical Engineering & Computer Science at the University of Michigan, was a key contributor to both WISP and WREL. Smith, together with heart surgeon Dr. Pramod Bonde of Yale University, evolved the WREL technology into FREED, the free-range resonant electrical energy delivery system for powering a ventricular assist device implanted in the human body — without requiring the traditional wire through the patient’s chest. This work on wireless power for implanted devices led to a series of other projects on power and communication for neural implants through the Center for Neurotechnology, a National Science Foundation Engineering Research Center, where Smith is Thrust Leader for Communication and Interface; and the University of Washington Institute for Neuroengineering (UWIN) funded by the Washington Research Foundation.

After his arrival at UW, Smith continued to build upon his previous work. Aiming to push the boundaries of wireless computing even further, he teamed up with Allen School professor Shyam Gollakota to develop ambient backscatter, a technique that transforms existing ambient television and cellular signals into both a power source and a communication medium, which earned a Best Paper Award at SIGCOMM 2013. The researchers later extended backscatter communication to WiFi with passive WiFi, which received a Best Paper Award at NSDI 2016. To enable internet-connected implantable devices to communicate with commodity devices such as smartphones and smart watches, they developed interscatter, a technique for using backscatter to transform wireless transmissions over the air from one technology to another, which earned a Best Paper Award at SIGCOMM 2016. Smith and his collaborators extended the utility of their approach with long-range backscatter, the first wide-area backscatter communication system that achieves coverage at distances up to 2.8 kilometers — orders of magnitude greater than prior systems — which garnered a Distinguished Paper Award at IMWUT 2017. Smith also co-led the UW team behind the world’s first battery‐free phone. The team has also developed a series of ultra-low-power battery-free wireless cameras that communicate via backscatter.

Smith has co-founded three venture-backed UW start-up companies based on his work: Wibotic, developer of near-field wireless robot charging systems, with CEO Ben Waters, a UW Ph.D. alumnus; Jeeva Wireless, developer of ultra-low power communication systems based on backscatter innovation, with Gollakota and UW alumni Bryce Kellogg, Aaron Parks, and Vamsi Talla; and Proprio, developer of light-field capture and visualization solutions to aid surgery, with Allen School Ph.D. student James Youngquist; UW Foster Business School alumnus Gabe Jones; Ken Denman, venture partner at Sway Ventures and member of the UW Foundation Board; and Dr. Sam Browd, a neurosurgeon at UW Medicine and Seattle Children’s Hospital.

“UW is such a supportive environment,” Smith said. “It is a privilege to work with so many wonderful colleagues and students, at an institution that is firing on all cylinders.”

Aaron Hertzmann, Fellow of the IEEE

Portrait of Aaron Hertzmann

Smith is joined among the latest class of IEEE Fellows by Aaron Hertzmann, who completed a postdoc in the Allen School’s Graphics & Imaging Laboratory (GRAIL) before going on to spend 10 years on the computer science faculty at the University of Toronto. Hertzmann is currently a principal scientist at Adobe Research and has been an affiliate professor at the Allen School since 2005. He was recognized by IEEE for contributions to computer graphics and animation, following on the heels of his selection as a Fellow of the ACM last year.

Hertzmann’s research draws from computer graphics, machine learning, animation and other fields. He devoted his early research career to advancing new methods for extracting meaning from images and for modeling human visual capabilities, motion, and 3D structure. Hertzmann also produced a variety of novel software tools for creating expressive, artistic imagery and animation that mimics human drawing and painting. Many of his contributions have been adopted by the broader graphics, gaming, and special effects communities. 

Hertzmann’s most recent work focuses on the development of techniques for producing robust, seamless immersive experiences in virtual reality. These include a new method for incorporating gated clips and view-dependent video textures in 360-degree video to ensure the user doesn’t miss important narrative elements, and a novel approach for introducing motion parallax and real-time 360-degree video playback in VR headsets that improves the immersive experience while reducing the risk of motion sickness. Hertzmann also contributed to the development of Vremiere, a system for direct editing of spherical video for VR environments that was the basis of Adobe’s Project Clover in-VR editing interface.

Lately, Hertzmann has shifted his attention to pushing boundaries in the realm of visual perception and the interplay between art and AI. “I’m currently interested in ways that insights from computer science can inform our understanding of art: of understanding how we create and perceive aesthetics and line drawings,” he explained.

Brad Calder, Fellow of the ACM

Portrait of Brad Calder

Allen School alumnus Brad Calder was named a Fellow of the ACM in recognition of his contributions to cloud storage, processor simulation, replay, and feedback-directed optimization of systems and applications. Calder, who went on to earn his Ph.D. in computer science from the University of Colorado Boulder after graduating from UW, is currently Vice President of Product and Engineering of Technical Infrastructure and Cloud at Google, overseeing compute, networking, storage, database, and data analytics services.

Before joining Google, Calder spent nearly 9 years at Microsoft, where he was among the co-founders of the Microsoft Azure cloud computing service. Calder and his team earned a Best Paper Award at the USENIX Annual Technical Conference in 2012 for introducing a new approach to erasure coding in Windows Azure Storage using local reconstruction codes (LRC) that enabled durable storage with low overhead and consistently low read latencies. Calder previously spent over a decade as a faculty member at the University of California, San Diego, where he co-directed the High Performance Processor Architecture and Compilation Lab. During his tenure at UCSD, where he now serves as an adjunct professor, Calder published more than 100 research papers spanning systems, architecture, and compilers.
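
The intuition behind local reconstruction codes is that a single lost fragment can be rebuilt by reading only a small local group rather than every surviving fragment. Here is a toy, XOR-only sketch of that idea (a hypothetical illustration, far simpler than the actual LRC construction in Azure Storage, which also adds global parities and Reed-Solomon-style coding):

```python
from functools import reduce

def xor_parity(fragments):
    """XOR parity of equal-length byte fragments, computed columnwise."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*fragments))

def encode(data, frag_size=4, group_size=3):
    """Split data into fragments and attach one XOR parity per local group.
    (Assumes len(data) divides evenly into equal-length fragments.)"""
    frags = [data[i:i + frag_size] for i in range(0, len(data), frag_size)]
    groups = [frags[i:i + group_size] for i in range(0, len(frags), group_size)]
    return [(g, xor_parity(g)) for g in groups]

def reconstruct(group, parity, lost_index):
    """Rebuild one lost fragment from its local group only:
    XOR of the survivors and the local parity recovers it."""
    survivors = [f for i, f in enumerate(group) if i != lost_index]
    return xor_parity(survivors + [parity])

# 12 bytes -> three 4-byte fragments forming a single local group.
[(group, parity)] = encode(b"hello world!")
assert reconstruct(group, parity, 1) == group[1]
```

The read-cost benefit is the point: recovering the lost fragment touches only its group’s members and one local parity, which is what keeps reconstruction reads cheap in a real deployment.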

The ACM is the world’s largest educational and scientific professional society devoted to advancing the field of computing. The ACM Fellow designation is held by less than 1% of the organization’s global membership. Learn more about the 2019 Class of ACM Fellows here.

IEEE has more than 400,000 members in 160 countries representing diverse engineering fields, from aerospace systems and biomedical engineering, to computing and telecommunications, to electric power and consumer electronics. Each year, IEEE elevates a select group — representing less than one-tenth of 1% of the organization’s global membership — to the status of Fellow based on their extraordinary contributions. View the complete list of 2020 IEEE Fellows here.

Congratulations to Magda, Paul, Josh, Aaron and Brad on this well-deserved recognition!

December 12, 2019

University of Washington students capture 3rd place in high-performance computing contest

From left to right: David Liu, Sungchan Park, Darius Strobeck, Matthew Cinnamon, Andrew Karavanov and Thorne Gavin

A team of University of Washington undergraduates won third place in the Student Cluster Competition (SCC) at the 2019 International Conference for High Performance Computing, Networking, Storage and Analysis (SC19). The competition, designed to expose students to the high-performance computing (HPC) community, attracts teams of undergraduate students from around the world. During the 48-hour nonstop competition, each team must assemble a small cluster and then race to complete real-world workloads.

Andrew Lumsdaine, affiliate professor in the Allen School and chief scientist for the Northwest Institute for Advanced Computing, worked with the six students over the fall quarter while they designed, acquired, and assembled the small cluster that they brought with them to the competition.

Computer engineering student Matthew Cinnamon, computer science students David Liu and Thorne Gavin, applied and computational math sciences student Andrew Karavanov, pre-engineering student Sungchan Park and political science student Darius Strobeck made up the team, called Boundless Dawg. They built a cluster with two nodes, each with two sockets and four graphics processing units (GPUs) to run some benchmarks and HPC applications. They also presented a poster, gave a lightning talk, submitted a report and answered interview questions from the judges.

“The competition itself was very stressful but also very enjoyable. Although we prepared the best we could, there were many unexpected events that presented interesting challenges for us,” said Liu. “We had to acquire a server rack in the hours prior to the start of the competition, diagnose initial poor benchmark results, determine how to run our workloads in the time allotted, recover from burning half our credits for cloud resources and time on a bad application run and rush out a report in the final hours.” 

Despite the unexpected challenges and limited time and resources, Liu thought he and his teammates completed a good number of real-world scientific workloads. Overall, he found the experience to be really rewarding and was grateful for the opportunity to compete. Lumsdaine, the team’s advisor, was similarly enthusiastic about the outcome.

“I am incredibly pleased with their performance,” said Lumsdaine. “The teams we went up against were well-financed and had much more experience. We had the fewest number of nodes, CPUs, GPUs and were a first-time entry.” 

Lumsdaine added that it was a terrific event for the students and they are all excited to compete again in the future. 

Learn more about Boundless Dawg, including how they got their name and the fact that none of them had any supercomputing experience prior to the competition, here.

Way to go, team! 

December 11, 2019

Back-to-school ritual takes on new significance for Allen School graduates turned faculty members

Fall in Seattle is signified by the sight of trees turning from green to rust, the sound of raindrops striking rooftops, and the energy infusing the University of Washington campus as students embark upon a new journey of intellectual and personal exploration. For a group of graduating Allen School Ph.D. students, this quintessential autumn ritual carries an added significance as they look forward to their new careers as faculty members at universities across the country and beyond.

Meet the 11 outstanding scholars who are set to extend the Allen School’s impact through teaching, research, and service:

James Bornholt: University of Texas at Austin

James Bornholt

James Bornholt will join the computer science faculty at the University of Texas at Austin next fall after spending a year as an applied scientist in Amazon Web Services’ Automated Reasoning Group. Bornholt completed his Ph.D. working with professors Emina Torlak, Dan Grossman, and Luis Ceze on research spanning programming languages, systems, and architecture as a member of the Allen School’s UNSAT and Programming Languages & Software Engineering (PLSE) groups. Bornholt earned a Best Paper Award at OSDI 2016 for Yggdrasil, a toolkit enabling programmers to write file systems with push-button verification, and a Distinguished Artifact Award at OOPSLA 2018 for SymPro, a novel technique for diagnosing performance bottlenecks in solver-aided code through symbolic evaluation. Bornholt was also an early contributor to the development of an archival storage system for digital data in synthetic DNA as part of the Molecular Information Systems Lab, a collaboration between the University of Washington and Microsoft Research. The project was selected as an IEEE Micro Top Pick in computer architecture for 2016.

Tianqi Chen: Carnegie Mellon University

Tianqi Chen

Tianqi Chen will join the faculty of Carnegie Mellon University in 2020 after spending a year at UW spinout OctoML. He completed his Ph.D. working with Allen School professors Carlos Guestrin, Luis Ceze, and Arvind Krishnamurthy as a member of both the MODE Lab and the interdisciplinary SAMPL group. Chen’s research encompasses machine learning and multiple layers of the system stack. He was one of the principal architects of the TVM framework, an end-to-end compiler stack designed to bridge the gap between deep learning and hardware innovation by enabling researchers and technologists to rapidly deploy deep learning applications on a range of devices without sacrificing power or speed. The Allen School team transitioned TVM to the non-profit Apache Software Foundation as an Apache Incubator project earlier this year. Chen also co-created XGBoost, an open-source, end-to-end tree boosting system designed to be highly efficient, flexible, and portable, which has been widely adopted by industry, and Apache MXNet, an open-source deep learning framework supporting flexible prototyping and production that was adopted by Amazon Web Services.

Eunsol Choi: University of Texas at Austin

Eunsol Choi

Eunsol Choi will join the faculty of the University of Texas at Austin next fall after spending a year as a visiting research scientist at Google AI in New York City. Choi completed her Ph.D. as a member of the Allen School’s Natural Language Processing group working alongside professors Luke Zettlemoyer and Yejin Choi. Her research focuses on methods for extracting and querying information from text using natural language questions, particularly structured representations of human knowledge such as scientific findings, historical facts, and opinions. Choi was lead author of multiple papers on this topic, including a novel framework for coarse-to-fine question answering that matched or outperformed existing models while scaling to longer documents, and a new dataset for exploring question answering in context (QuAC) that draws upon 14,000 information-seeking dialogs between teacher and student. Choi also contributed to an analysis of the linguistic patterns of news articles and political statements to determine whether content is trustworthy, unreliable, or satirical.

Jialin Li: National University of Singapore

Jialin Li

Jialin Li accepted a faculty position at the National University of Singapore after completing his Ph.D. in the Computer Systems Lab, where he worked with Allen School professors Arvind Krishnamurthy and Tom Anderson and affiliate professor Dan Ports of Microsoft Research. Li builds practical distributed systems that combine strong consistency with high performance. One example is Arrakis, a project that earned a Best Paper Award at OSDI 2014. Arrakis is a new operating system that separates the OS kernel from normal application execution to allow applications access to the full power of the unmediated hardware. Li and his colleagues later received a Best Paper Award at NSDI 2015 for Speculative Paxos, a new replication protocol for distributed systems deployed in the data center that employs a new primitive, Multi-Order Multicast (MOM), to achieve significantly higher throughput and lower latency than the standard Paxos protocol. Li was lead author of a subsequent paper that introduced Network-Ordered Paxos (NOPaxos), a system for dividing replication responsibility between the network and protocol layers using another new primitive, Ordered Unreliable Multicast (OUM). NOPaxos achieves replication in the data center without the performance cost of traditional approaches.

Dominik Moritz: Carnegie Mellon University

Dominik Moritz

Dominik Moritz is currently a research scientist at Apple and will join the faculty of CMU’s Data Interaction Group next year. Moritz recently completed his Ph.D. working with Allen School professor Jeffrey Heer and iSchool professor Bill Howe as a member of the Interactive Data Lab and the Database Group. His research focuses on the development of scalable interactive systems for data visualization and analysis. Moritz was a member of the team that developed Vega-Lite, a high-level grammar for rapidly generating interactive data visualizations that earned a Best Paper Award at InfoVis 2016. He subsequently received another Best Paper Award at InfoVis 2018 for Draco, an extension of Vega-Lite that offers a constraint-based tool for building visualizations. Draco formalizes guidelines for visualization design while permitting trade-offs based on user preferences. Moritz also co-created user-centered tools such as Pangloss, which applies optimistic visualization to enable interactive, exploratory data analysis of approximate query results, and Falcon, a web-based system for optimizing latency-sensitive interactions such as brushing and linking that eliminates costly precomputation and enables cold-start exploration of large-scale datasets.

Rajalakshmi Nandakumar: Cornell University

Rajalakshmi Nandakumar

Rajalakshmi Nandakumar will join the faculty of Cornell University next spring as a member of the Jacobs Technion–Cornell Institute at Cornell Tech. Nandakumar earned her Ph.D. working with professor Shyam Gollakota in the Allen School’s Networks & Mobile Systems Lab, where she focused on the development of mobile health applications and novel interaction technologies leveraging the Internet of Things. Her projects included ApneaApp, a mobile app that employed active sonar technology to detect signs of sleep apnea, which was commercialized by ResMed as part of the publicly available SleepScore app, and SecondChance, a mobile app for detecting signs of opioid overdose that was presented in the journal Science Translational Medicine and is being commercialized by Sound Life Sciences Inc. She and her colleagues also earned a Best Paper Award at SenSys 2018 for µLocate, a low-power wireless localization system for subcentimeter-sized devices. During her time at the Allen School, Nandakumar was recognized with a Paul Baran Young Scholar Award from the Marconi Society, a Graduate Student Innovator of the Year Award from UW CoMotion, and a GeekWire feature as “Geek of the Week.”

Pavel Panchekha: University of Utah

Pavel Panchekha

Pavel Panchekha joined the University of Utah faculty after earning his Ph.D. working with professors Michael Ernst and Zachary Tatlock in the Allen School’s PLSE group. Panchekha’s research focuses on mechanical reasoning and synthesis for domain specific languages, including floating-point numerical programs and web page layout code. He and his colleagues earned a Distinguished Paper Award at PLDI 2015 for Herbie, a tool for finding and fixing floating-point problems. Herbie automatically rewrites floating-point expressions to eliminate numerical rounding errors and improve the accuracy of programs. Panchekha was also a major contributor to the Cassius Project, a framework for reasoning about web page layouts that offers tools for verification, synthesis, and debugging based on an understanding of how web pages render. As part of that project, Panchekha led the development of VizAssert, which verifies the accessibility of page layouts across a range of possible screen sizes, browsers, fonts, and user preferences.
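
The flavor of rewrite Herbie automates can be seen in a classic textbook example (our own illustration, not drawn from the Herbie paper): for large x, the expression sqrt(x+1) - sqrt(x) subtracts two nearly equal values and cancels away most of its significant digits, while the algebraically identical conjugate form remains accurate.

```python
import math

def naive(x):
    # Subtracting two nearly equal square roots loses most
    # significant digits for large x (catastrophic cancellation).
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    # Multiplying by the conjugate gives an algebraically identical
    # expression with no cancellation, so it stays accurate.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

# At x = 1e15 the two forms disagree in their leading digits,
# even though they are mathematically equal.
x = 1e15
difference = abs(naive(x) - rewritten(x))
```

Herbie searches for rewrites of exactly this kind automatically, guided by sampled error measurements, rather than requiring the programmer to spot the cancellation by hand.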

Aditya Vashistha: Cornell University

Aditya Vashistha

Aditya Vashistha is joining Cornell University’s Department of Information Science this fall after completing a stint as a visiting researcher at Microsoft. Vashistha, who has the distinction of having earned the 600th Ph.D. granted by the Allen School, completed his degree working with professor Richard Anderson in the Information & Communication Technology for Development (ICTD) Lab. His research focuses on the development and deployment of novel computing systems for people with disabilities or low literacy and residents of rural communities, including the first-ever analysis of the use of social media platforms by low-income people in India, which earned a Best Student Paper Award at ASSETS 2015, and Sangeet Swara, a voice forum that relies on community moderation to disseminate cultural content in rural India that earned a Best Paper Award at CHI 2015. He and his collaborators later earned an Honorable Mention at CHI 2017 for Respeak, a low-cost, voice-based speech transcription system that provides dignified digital work opportunities in low-resource settings. Vashistha’s work, which earned him a Graduate Student Researcher Award from the UW College of Engineering, so far has reached an estimated 220,000 people in Africa and southern Asia.

Doug Woos: Brown University

Doug Woos

Doug Woos joined the Brown University faculty as a lecturer focused on introductory computer science and courses in systems and programming languages. He recently earned his Ph.D. working with Allen School professors Tom Anderson in the Computer Systems Lab and Michael Ernst and Zachary Tatlock of the PLSE group. In his research, Woos applies techniques from programming languages to systems problems, with a focus on new approaches for verifying and debugging distributed systems. He was a member of the team behind the award-winning Arrakis operating system and co-led the development of Verdi, a novel framework for the formal verification of distributed systems using the Coq proof assistant that supports fault models ranging from idealistic to realistic. Woos and his colleagues later used Verdi to achieve the first full formal verification of the Raft consensus protocol, a critical component of many distributed systems. He also led the development of Oddity, a graphical interactive debugger for distributed systems that combines the power of traditional step-through debugging with the ability to perform exploratory testing.

Mark Yatskar: University of Pennsylvania

Mark Yatskar

Mark Yatskar will take up a faculty position at the University of Pennsylvania next fall after completing his time as an AI2 Young Investigator. He completed his Ph.D. working with Allen School professors Luke Zettlemoyer in the NLP group and Ali Farhadi of GRAIL on research that uses the structure of language to advance new capabilities in computer vision. Yatskar was a member of the team that developed ImSitu, a situation recognition tool that uses visual semantic role labeling to move computers beyond simple object or activity recognition. ImSitu is designed to achieve a more human-like understanding of how participants and objects interact in a scene, enabling computers to predict what will happen next. Yatskar has also studied ways to reduce gender bias in machine learning datasets. For example, he and his collaborators earned a Best Long Paper Award at EMNLP 2017 for presenting Reducing Bias Amplification (RBA), a technique for calibrating the outputs of a structured prediction model to avoid amplifying gender biases ingrained in image labels incorporated into training datasets.

Danyang Zhuo: Duke University

Danyang Zhuo

Danyang Zhuo will join the faculty of Duke University next fall after completing a postdoc in the RISE Lab at the University of California, Berkeley. Zhuo recently completed his Ph.D. working with professors Tom Anderson and Arvind Krishnamurthy in the Allen School’s Computer Systems Lab. His research spans operating systems, distributed systems, and computer networking, with an emphasis on improving the efficiency and reliability of infrastructure and applications in the cloud. One of his early contributions was Machine Fault Tolerance (MFT), a new failure model that improves the resilience of data center systems against undetected CPU, memory and disk errors. Zhuo was lead author of a paper presenting Slim, a low-overhead container overlay network that improves the performance of large-scale distributed applications. Slim virtualizes the network by manipulating connection-level metadata, improving throughput and reducing latency. He also led the development of CorrOPT, a system for mitigating packet corruption in data center networks that was shown to reduce corruption losses by up to six orders of magnitude and improve repair accuracy by 60% compared to the current state of the art.

“Whether our graduates are heading to academia or industry, we are extremely proud of their past achievements and ongoing contributions to our field,” said Allen School director Hank Levy. “But I am thrilled that so many of our outstanding alumni this year will be guiding the next generation of students in using computing to make a positive impact on society. It is exciting to see so many former students become faculty colleagues who will extend the reach of the Allen School and University of Washington around the world.”

In addition to the 11 alumni of the Allen School’s Ph.D. program, two graduates with strong ties to the program also went on to faculty positions. Edward Wang, a graduate of UW’s Department of Electrical & Computer Engineering, recently joined the faculty of the University of California, San Diego. Wang earned his Ph.D. working with professor Shwetak Patel in the Allen School’s Ubicomp Lab on new sensing techniques for detecting and monitoring health conditions using mobile devices. Sarah Chasins, meanwhile, spent the past several years embedded in the Allen School working with professor Rastislav Bodik while completing her Ph.D. at the University of California, Berkeley, where Bodik was previously a faculty member. Chasins, whose research aims to democratize programming and develop tools that make it easy to automate programming tasks, will join the Berkeley faculty next fall.

Congratulations and best wishes to all of our newly minted faculty colleagues — see you on the conference circuit!

November 26, 2019

Ph.D. student Benjamin Lee named Library of Congress Innovator in Residence

Benjamin Lee (right) poses with fellow Innovator in Residence Brian Foo in Washington, D.C. Photo: Kinedy Aristud, Library of Congress

Benjamin Lee, a second-year Ph.D. student in the Allen School’s Artificial Intelligence group working with professor Daniel Weld, has been named a 2020 Innovator in Residence by the Library of Congress. Now in its second year, the Innovator in Residence program aims to enlist artists, researchers, journalists, and others in developing new and creative ways of using the library’s digital collections.

During his residency, Lee will apply deep learning to enable the automatic extraction and tagging of photographs and illustrations contained in the more than 15 million newspaper scans comprising the library’s Chronicling America collection. His goal is to produce interactive visualizations, searchable by topic, that will make the content more accessible to users and support cultural heritage research.

“A primary motivation behind my project is to excite the American public by demonstrating the possibilities of applying machine learning to library collections,” Lee explained in an interview posted on the library’s blog. “Given the widespread enthusiasm about machine learning, this project could draw new people to the Library of Congress’s digital collections, as well as excite the Library’s regular users about emerging technological advances. My hope is that this project could also inspire members of the public to start their own coding projects involving the Library of Congress’s digital collections.”

Lee is no stranger to combining technology and culture, having first developed an interest in digital humanities as an undergraduate at Harvard College. That led to a year-long fellowship at the United States Holocaust Memorial Museum, where he used machine learning to enable new ways for users and researchers to search the archives of the International Tracing Service. His journey into this line of research was a deeply personal one, inspired by his grandmother who survived Auschwitz-Birkenau Concentration Camp during the Holocaust.

Lee previously earned a Graduate Research Fellowship from the National Science Foundation to support his work at the Allen School on explainable artificial intelligence and human-AI interaction. He is one of only two Innovators in Residence named by the library this year; the other, Brian Foo, is a data visualization artist at the American Museum of Natural History who plans to make interesting and culturally relevant material from the library’s audio and moving image collections more accessible to the public by embedding it into hip hop music.

Read the Library of Congress press release here, and an interview with Lee and Foo here. Learn more about the Innovator in Residence program here.

Congratulations, Ben!

November 25, 2019
