The award will support Torlak and Wang’s continued work in applying cutting-edge verification and synthesis technology to critical infrastructure software. Their most recent project, Jitterbug, is a tool for writing and proving the correctness of just-in-time (JIT) compilers for the Linux kernel’s extended Berkeley Packet Filter (eBPF). It provides a formal correctness guarantee for eBPF JITs, a security-critical component of the Linux operating system, through a precise specification of JIT correctness and an automated verification strategy that scales to practical implementations.
“Compiler bugs can be very costly in this setting,” Torlak said, “so it’s important to catch them early.”
Jitterbug builds on two of UNSAT’s existing projects: Serval, a framework for creating automated verifiers for systems software, and Rosette, a solver-aided programming language for synthesis and verification. The team has used the tool to find more than 30 new bugs in a number of deployed eBPF JITs and to develop new optimizations. With support from the Amazon award, the group aims to improve the usability of Jitterbug and integrate it more tightly into the development workflow.
The UNSAT group was formed when Torlak and Wang joined the Allen School in 2014, with Wang’s expertise in operating systems complementing Torlak’s in programming languages and automated reasoning.
“It’s hard for people to get everything right when building software systems,” Wang said. “How do you build correct systems without too much effort? We created UNSAT to pull our resources together to develop solutions that make this process easier.”
Professor Anna Karlin of the University of Washington’s Theory of Computation group recently became the first Allen School faculty member to be elected a member of the National Academy of Sciences. Karlin, who holds the Bill & Melinda Gates Chair in Computer Science & Engineering and serves as Associate Director for Graduate Studies at the Allen School, was honored for her significant contributions to algorithms and algorithmic game theory. She joins a distinguished community of scholars elected by their peers to advise the nation on matters related to science and technology while promoting education and research.
Much of Karlin’s early research was focused on online and probabilistic algorithms and analysis. Online algorithms receive a series of inputs over time, and must make decisions as each input arrives, without the benefit of information about future inputs. Karlin and her collaborators Mark Manasse, Larry Rudolph and Daniel Sleator coined the phrase “competitive algorithm” to describe an online algorithm that achieves the provably best possible performance compared to the clairvoyant optimal result — in other words, performs nearly as well as if decisions were made with full knowledge of the future.
A useful abstraction and simple illustrative example is the ski rental problem. “When you start out skiing, you don’t know if you’ll love it or hate it or how many times you will end up going,” Karlin explained. “And yet, each time you head out, you have to make a decision: should you rent or should you buy?
“If you rent until the total amount you’ve spent equals the cost of buying skis,” she went on, “you will never pay more than twice what the optimal clairvoyant would have paid. More interesting is that using randomization reduces this factor of 2 down to e/(e-1), which is about 1.58. And it turns out that both of these bounds are provably optimal.”
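The break-even strategy Karlin describes is simple enough to check directly. The following sketch (a toy model with hypothetical prices, not code from any of the cited papers) computes the worst-case ratio between the strategy’s cost and the clairvoyant optimum:

```python
def worst_case_ratio(rent=1, buy=10, horizon=100):
    """Break-even strategy: rent until the total rent paid equals the
    purchase price, then buy. Returns the worst-case cost ratio versus
    a clairvoyant who knows how many days of skiing lie ahead."""
    threshold = buy // rent               # days to rent before buying

    def strategy_cost(days):              # what the strategy pays over `days` outings
        if days <= threshold:
            return days * rent
        return threshold * rent + buy     # rented to break-even, then bought

    def clairvoyant_cost(days):           # knows the future: rent always, or buy on day one
        return min(days * rent, buy)

    return max(strategy_cost(d) / clairvoyant_cost(d) for d in range(1, horizon))

ratio = worst_case_ratio()  # never exceeds 2, matching the deterministic bound
```

The worst case occurs on the day just after the break-even point: the strategy has paid the full purchase price in rent and then buys, spending twice what the clairvoyant would have.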
Karlin and various co-authors applied this style of analysis to achieve optimally competitive algorithms for a number of problems including memory management, load balancing, and transmission control protocol (TCP) acknowledgement.
A favorite topic of Karlin’s throughout her career has been probabilistic algorithms and analysis. One of her most impactful results in this area emerged from a collaboration with Yossi Azar, Andrei Broder and Eli Upfal on “Balanced Allocations,” which has come to be known as the “power of two choices.” This paper considers a basic “balls-in-bins” problem: It has long been known that when n balls are tossed one at a time into n bins, independently and uniformly at random, the expected value of the maximum load is about log(n)/loglog(n). Karlin and her co-authors showed that if, when placing each of the n balls into a bin, two random bins are selected and then the ball is placed in the less loaded bin, this maximum load drops to loglog(n) — an exponential improvement.
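The effect is easy to observe empirically. This short simulation (illustrative only, with an arbitrary fixed seed) compares the maximum load under one random choice versus two:

```python
import random

random.seed(1)  # arbitrary seed, fixed for reproducibility

def max_load(n, choices):
    """Throw n balls into n bins; each ball samples `choices` bins uniformly
    at random and lands in the least loaded of them. Return the max load."""
    bins = [0] * n
    for _ in range(n):
        picks = [random.randrange(n) for _ in range(choices)]
        bins[min(picks, key=lambda b: bins[b])] += 1
    return max(bins)

n = 100_000
one_choice = max_load(n, 1)   # grows like log(n)/loglog(n)
two_choices = max_load(n, 2)  # grows like loglog(n): exponentially smaller
```

Even at this modest scale, the two-choice maximum load is visibly smaller, and the gap widens as n grows.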
“Balls-in-bins problems are ubiquitous in computer science,” said Karlin, “and the ‘power of two choices’ paradigm has turned out to be useful in a variety of applications, from hashing and load balancing to network and data center management.”
In 1999, Karlin became excited by work on “Competitive Auctions and Digital Goods” that her student Jason Hartline (Ph.D., ‘03) had done with Andrew Goldberg and Andrew Wright during a summer internship at InterTrust Technologies’ STAR Lab. That work, which recently earned the 2021 SIGECOM Test of Time Award for “initiating a long and fruitful line of work in approximately revenue-optimal auction design in prior free settings,” inspired Karlin to turn her attention to auction theory and, more generally, the field of mechanism design. Mechanism design applies the tools of game theory to design systems in which rational participants intent on maximizing their own self-interest will also achieve the designer’s intended goal. Potential applications run the gamut from network traffic routing and scheduling tasks in the cloud, to ecommerce and search engine advertising — all of them enabled by the rise of the internet.
“Nowadays, pretty much any real algorithmic problem that arises in a distributed setting is in fact a mechanism design problem,” Karlin noted.
The implications of Karlin’s work in this area extend to multiple industries in which the provision of goods and services depends on finding the ideal balance between pricing and profit given available resources and consumer motivations. For example, in joint work with Shuchi Chawla, Nikhil Devanur and Balasubramanian Sivan, she considered the design of “pay-per-play” pricing of songs, apps, video games and other software for consumers whose value for such a digital good evolves as they use it according to a stochastic process known as a martingale. In contrast to the standard approach to selling goods, where the buyer purchases an item and then can use it an unlimited number of times, in this form of pricing the buyer pays a little bit each time they want to use it. This enables a seller to extract different amounts of money from different consumer types based on how long and to what extent they remain interested in the product, as opposed to the one-price-fits-all model.
“We showed that using a free trial period followed by a fixed posted price for each use yields near-optimal profit for such digital goods — and one which is, generally speaking, significantly higher than that obtained using other pricing methods,” Karlin said. “Pay-per-play has the added advantage of being risk-free to the consumer, since they can stop buying that app, game or song whenever they lose interest.”
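A toy simulation conveys the idea (all parameters below are hypothetical; the actual analysis models buyer values as a martingale and proves near-optimality of the trial-plus-posted-price scheme):

```python
import random

random.seed(0)  # arbitrary seed for reproducibility

def pay_per_play_revenue(price, trial_plays=5, max_plays=50):
    """One consumer whose per-play value follows a martingale:
    E[v_{t+1} | v_t] = v_t. After a free trial, each play costs `price`;
    the consumer plays only while the value exceeds the price, which is
    what makes the scheme risk-free for them."""
    value, revenue = 5.0, 0.0
    for t in range(max_plays):
        if t >= trial_plays:
            if value < price:
                break                     # loses interest, stops buying
            revenue += price
        value += random.uniform(-1, 1)    # mean-zero step keeps it a martingale
    return revenue

# Average revenue across many simulated consumers at a single posted price
avg = sum(pay_per_play_revenue(price=2.0) for _ in range(1000)) / 1000
```

Consumers whose interest persists keep paying; those whose value drifts below the price stop, so the seller automatically extracts more from more engaged consumer types.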
In another paper, Karlin worked with Kira Goldner (Ph.D. ‘19), Amos Fiat and Elias Koutsoupias to solve the “FedEx problem,” wherein the seller has to balance maximizing revenue with customer expectations related to service quality and cost. Sellers tend to have a variety of options available to them for responding to price sensitivity and other customer-driven factors, including market segmentation, differential pricing, or lower-cost (and presumably lower-quality) versions of a product or service. Karlin and her collaborators came up with the optimal approach for the seller to extract maximum revenue in instances where customer values such as desired delivery time or other factors are drawn from known probability distributions.
Despite its name, like many of the questions Karlin has tackled in her career, the results on the FedEx problem are more broadly applicable to a variety of domains. This also holds true for her work on novel algorithmic mechanisms for maximizing social welfare in the pricing of goods with the prospect of resale with Goldner, Fiat, Alon Eden and Michal Feldman. In this case, the team addressed a scenario known as the interdependent value setting, which captures the fact that information one prospective buyer has about goods being sold may significantly affect the value other buyers have for those goods. The researchers used the example of auctioning off oil drilling rights on a piece of land, where geologic surveys done by one prospective buyer contain information that could significantly sway the interests of another prospective buyer.
The decades-old economics literature on this problem had yielded strong impossibility results, even in extremely simple scenarios such as selling a single item. Karlin and her co-authors were able to circumvent these negative results using a combination of randomization and approximation, obtaining the first positive results for a broad class of combinatorial auctions — and earning the award for Best Paper with a Student Lead Author at the 20th Conference on Economics and Computation (EC ‘19).
Other problems Karlin has considered include pricing and resource allocation in the cloud, sponsored search auction design and analysis, minimizing cost in procurement auctions such as “buying” a shortest path in a network or a spanning tree, strategic behavior of Bitcoin miners, and the design of auctions that achieve near-optimal performance in the presence of a Bayesian prior without actually knowing the prior. Going forward, inspired by the Mechanism Design for Social Good initiative founded by her former student Goldner and Rediet Abebe, Karlin aims to build on past work and apply her expertise in mechanism design and modern matching markets to domains such as health care, energy, and the gig economy.
Along with advancing new algorithms for mechanism design, Karlin is also interested in pushing the field forward by addressing long-standing problems that underpin the theory and practice of computing. In what is likely to be the most influential result of her career, last year she worked with current Ph.D. student Nathan Klein and faculty colleague Shayan Oveis Gharan to achieve the first improved approximation algorithm for the Traveling Salesperson Problem (TSP) in over 40 years. TSP — which aims to find the most efficient route across multiple points and back to the start — is a fundamental question with real-world implications ranging from transportation and logistics, to genomics and circuit board design. TSP is also a cornerstone problem in theoretical computer science, the study of which has led to the development of many foundational algorithmic techniques.
“TSP is NP-complete, so the holy grail has been to design an efficient approximation algorithm that is guaranteed to get a nearly optimal solution,” Karlin explained. “In the 70s, researchers discovered an algorithm that always guarantees a solution whose cost is never more than 50% higher than optimum. Nathan, Shayan and I were finally able to surpass the 50% threshold, which hopefully will pave the way for even greater progress down the road.”
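The flavor of such guarantees can be seen in the even older and simpler “double-tree” algorithm, which guarantees a tour at most 100% above optimum (the 1970s algorithm Karlin mentions, and now her team’s result, do strictly better). Below is a sketch on a tiny hypothetical instance, comparing the approximation against a brute-force optimum:

```python
import itertools
import math

# Tiny hypothetical instance: points in the plane, Euclidean distances
pts = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]
n = len(pts)

def dist(i, j):
    return math.dist(pts[i], pts[j])

def mst_edges():
    """Prim's algorithm on the complete graph."""
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        a, b = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(b)
        edges.append((a, b))
    return edges

def double_tree_tour():
    """Depth-first walk of the MST with shortcuts: a 2-approximation,
    since the MST costs at most OPT and the walk doubles it."""
    adj = {i: [] for i in range(n)}
    for a, b in mst_edges():
        adj[a].append(b)
        adj[b].append(a)
    order, seen = [], set()

    def dfs(u):
        seen.add(u)
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v)

    dfs(0)
    return order + [0]

def tour_cost(tour):
    return sum(dist(tour[k], tour[k + 1]) for k in range(len(tour) - 1))

opt = min(tour_cost((0,) + p + (0,)) for p in itertools.permutations(range(1, n)))
approx = tour_cost(double_tree_tour())   # guaranteed: approx <= 2 * opt
```

Christofides’ algorithm tightens the analysis by adding a minimum-weight matching instead of doubling the tree, and the Karlin-Klein-Oveis Gharan result is the first to improve on that 3/2 factor.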
According to Oveis Gharan, Karlin stands out not only for being a gifted theoretician but also for her eagerness to embrace unfamiliar ideas and learn alongside her students.
“I have worked with a few senior researchers in my career, but Anna is perhaps the only one who gets so excited and spends a significant amount of time learning new ideas and new techniques outside of her comfort zone,” Oveis Gharan said. “This is a great motivation for graduate students who are just starting out to learn new techniques together with their advisor and to build their self-confidence as researchers.”
Karlin’s enthusiasm for helping students to explore new ideas inspired her to co-author the book Game Theory, Alive with mathematician Yuval Peres. Published by the American Mathematical Society in 2017, the book presents the mathematical concepts that underpin game theory in an approachable and engaging way, including anecdotes, illustrations, and profiles of the individual mathematicians, statisticians, and economists credited with advancing the field.
“Game theory’s influence is felt in a wide range of disciplines, and the authors deliver masterfully on the challenge of presenting both the breadth and coherence of its underlying world-view,” Cornell professor Jon Kleinberg wrote. “The book achieves a remarkable synthesis, introducing the reader to the blend of economic insight, mathematical elegance, scientific impact, and counter-intuitive punch that characterizes game theory as a field.”
Karlin’s research career began at Stanford University, where she earned her bachelor’s degree in applied mathematics before earning her Ph.D. in computer science. Following a postdoc at Princeton University, Karlin spent six years as a principal scientist at DEC’s Systems Research Center. While it became clear early on that she would be a “rock star” in the field of computing, Karlin was also an actual rock star wannabe; in 1993, she took part in the first musical performance to be live-streamed on the internet with the band Severe Tire Damage — although, as Karlin herself says with a smile, “just because we were first doesn’t mean we were any good.”
Shortly thereafter, she left Palo Alto for Seattle to join the UW as a visiting professor in 1994. She made the move permanent two years later.
“I’m so honored to receive this recognition,” said Karlin. “All I can say is that my good fortune is due to the many brilliant colleagues and students I’ve had the pleasure of collaborating with during my career.”
Karlin is one of 120 leading scholars and scientists to be elected full members of the NAS and among a record-high 59 women recognized in the 2021 class. The Academy elected another 30 non-voting international members, bringing the total honorees to 150.
“We are incredibly proud of Anna for being elected to the National Academy of Sciences,” said Magdalena Balazinska, professor and director of the Allen School. “Anna is a tremendously talented researcher. She’s also a dedicated colleague and deeply caring teacher and mentor. She’s an inspiration to all of us.”
Karlin was previously elected Fellow of the American Academy of Arts & Sciences and of the Association for Computing Machinery. She is the fifth Allen School faculty member to be elected a member of the National Academies, joining professors Tom Anderson, Ed Lazowska, and Hank Levy and professor emeritus Susan Eggers — all of whom were inducted into the National Academy of Engineering.
Ten Allen School students earned recognition from the National Science Foundation (NSF) as part of its latest round of Graduate Research Fellowship Program (GRFP) awards, which honor outstanding students who are pursuing full-time research-based degrees with the potential to produce innovative contributions in science and engineering.
Since 1952, this prestigious competition has provided support for graduate education in NSF-supported STEM (science, technology, engineering and mathematics) disciplines. The Allen School honorees — eight Ph.D. students and two undergraduate students — were recognized in the “Comp/IS/Engr” or other computing-related categories.
Johnson’s research focuses on structural properties in systems, like the bistability demonstrated in leaf-out origami, to create low-power and small-scale robots optimized for resource-constrained applications. He utilizes various reinforcement learning algorithms to simulate the locomotion of structures in open-source physics engines to discover energy-efficient control systems for structures with complex energy landscapes. He aims to create battery-less micro robots that are designed to operate autonomously for prolonged periods of time, quickly self-adapt and serve as the sensing nodes in a swarm algorithm. Origami robots have potential applications in many environments requiring both low-power and small-scale devices, like in the deployment of space rovers for interplanetary exploration.
Second year Ph.D. student Maruchi Kim, who also works with Gollakota, received a fellowship for his research at the intersection of human-computer interaction (HCI) and wireless audio systems.
Kim’s work aims to take audio technology beyond noise-canceling and transparency, and expand upon existing hearing aids and earbuds to augment the sound environment. His goal is to interactively attenuate, amplify and decorate specific sound sources of the wearer’s choosing, using noninvasive neural signals. Kim will integrate ideas from neural engineering and deep learning and embed them into a novel wireless audio system capable of user-selectable audio amplification and attenuation. This technology could ultimately make its way into hearing aids and audio wearables to give people a more enriched hearing experience via a natural amplification of sounds that they want to hear.
Nanavati is interested in developing models of human behavior and integrating them into robot planning to enable smooth human-robot collaboration. As robots are increasingly deployed in dynamic and uncertain human environments, situations will arise that they are not equipped to handle. For example, consider an office robot. Due to limitations in hardware and computation, environmental uncertainties, or a lack of domain knowledge, the robot might get lost, get knocked over, etc. Nanavati’s aim is to develop mathematical models and frameworks that enable a robot to effectively ask for help. This would require it to reason about humans’ help-giving behaviors and its own capacities to autonomously determine who, when and how to ask for help. Doing so will enable robots to adapt to dynamic and unexpected scenarios by switching between being autonomous and collaborative.
Ruth explores applications of computing tools to improve the quality and accessibility of health care. He aims to apply computing and engineering principles to expand access to ischemic stroke screening, specifically for carotid artery stenosis and systemic embolism. Ruth will extract pulse waveforms from smartphone video of the face, advancing computer vision and signal processing by building algorithms that compute pulse onset and peak delay under variable heart rates and noisy signals. He will also build a wearable ultrasound sensor that will require high precision timing, safe power management and embedded processing. After graduating this June with a degree in computer engineering and bioengineering, Ruth will enter the computer science Ph.D. program at Stanford University in the fall.
Strandquist’s research is in analyzing electrocorticography (ECoG) recordings of natural behaviors. These recordings can be used in neural decoding, which can enable brain-computer interfaces to restore critical functions such as communication and mobility to people with neurological diseases. Strandquist found that while state-of-the-art neural decoders have significant potential, they are primarily tested on clean data obtained in controlled laboratory conditions. In reality, neural activity in freely behaving humans is unstructured and non-stationary over time, which can impede brain-computer interfaces from being deployed for use outside the lab. Her aim is to design a pipeline for real-time neural decoding of natural human behaviors from unstructured data that is viable for long-term use.
Weinberger’s work is at the intersection of machine learning and medicine. Single cell RNA sequencing (scRNA-seq) technologies offer an unprecedented opportunity to effectively characterize cellular level heterogeneity in health and disease, thereby opening a new avenue to gain insights into poorly understood diseases such as Alzheimer’s. Unfortunately, the high-dimensional nature of scRNA-seq data makes artificial intelligence (AI) models trained on it prone to overfitting, which can lead to spurious, unreproducible findings. To solve this, Weinberger is working on an AI framework called RISE (robust, interpretable single-cell embeddings). Once trained, RISE models can be used off-the-shelf by non-AI experts for arbitrary downstream tasks.
Second year Ph.D. student Ishan Chatterjee received an honorable mention from the NSF recognizing his work bridging hardware and HCI with Patel in the UbiComp Lab.
Chatterjee focuses on interface and sensing technologies to unlock smarter, more natural or more efficient experiences. For instance, in the case of frontline workers whose workflows are inherently hands-on and spatial rather than desk-based, mixed reality (MR) computing interfaces can help meet their needs. His goal is to use them to make the workflows of frontline workers more productive, more intuitive and more collaborative, employing human-centered design processes to address frontline worker scenarios in design and manufacturing and generalizing the approach through low-code authoring tools that reduce the barrier to creating MR applications. Chatterjee is also developing lightweight sensor systems and technologies to use in hands-on and social contexts for workers in people-facing roles.
La Fleur’s research focuses on learning to predict mutants that can disrupt protein-protein interactions (PPIs), which is an important step towards identifying potential pathogenic variants in the human genome, to advance our understanding of diseases such as cancer and improve the well-being of individuals and society. Variation in genetic coding sequences can result in missense mutations in protein amino acids, potentially disrupting PPIs and hindering or destroying normal protein functions. La Fleur has found that current deep learning models perform poorly when distinguishing PPI-disruptive missense mutations from non-disruptive mutations. She aims to develop surface feature input PPI models for predicting human protein interactions using available interactome-scale datasets such as the Human Reference Interactome (HuRI).
Undergraduate Millicent Li received an honorable mention for her work with Patel in the UbiComp Lab on tools to support mental health.
Li’s research spans HCI, mobile computing and natural language processing (NLP). Her goal is to make inferences about an individual’s stress level by applying NLP and speech processing techniques to predict stress in day-to-day conversational settings. Li plans to develop a framework for predicting stress and analyze the efficacy of NLP paired with speech processing for the task in order to create wearables for quick mental health analysis. She hopes that the project will help shape future research into interdisciplinary approaches for smartphone sensing in large-scale mental health analysis for diverse populations. After graduation, she will work as an AI Resident at Facebook AI Research before starting her Ph.D. in computer science at Northeastern University in fall 2022.
Wei’s research focuses on usability of computer security and privacy (S&P). She is investigating the role gender plays in how people conceptualize S&P measures, particularly when it comes to everyday behaviors like avoiding scams, creating passwords and sharing information on social media. By analyzing stereotyped themes present in the literature, as well as empirically measuring gender stereotypes in the field, she will identify gendered differences in S&P knowledge, behaviors or threat models. This work combined with her prior research into systemic forces in other contexts — individuals reusing the same passwords for online accounts, and their privacy helplessness in the face of massive online data collection — will further her goal of equity in security and privacy and develop a user-empowering approach for conducting and sharing her research.
Recent Allen School bachelor’s alumni Siena Dumas Ang (B.S.,’17), Belinda Zou Li (B.S.,’19), Jenny Liang (B.S., ’21) and Harrison Kwik (B.S.,’18) also earned fellowships. Ang, currently a second year Ph.D. student at Princeton University, earned a fellowship in the “Life Sciences” category for her work in computational biology. She previously worked with professor Hannaneh Hajishirzi in AI before focusing on DNA data storage with Microsoft and the Molecular Information Systems Lab. Li, a first year Ph.D. student at Massachusetts Institute of Technology, previously worked with professor Luke Zettlemoyer in NLP. Liang, who will be a Ph.D. student at Carnegie Mellon University in the fall, worked with professor Amy Ko in the Code and Cognition Lab, as did fellow honoree Kwik, who is currently a second year Ph.D. student in the Technology and Social Behavior program at Northwestern University. Kwik earned a fellowship in the “STEM Education and Learning Research” category.
Allen School alumni Ben Evans (B.S.,’18, M.S.,’19) and Spencer Peters (B.S.,’19) received honorable mentions. Evans worked with professor Sham Kakade and Ph.D. student Aravind Rajeswaran on an off-policy reinforcement learning algorithm. He is now a first year Ph.D. student at New York University. Peters previously worked on improving phasing algorithms with professor Walter Ruzzo and is currently a second year Ph.D. student at Cornell University.
In addition to the Allen School honorees, students from other UW departments were also recognized by the NSF in the “Comp/IS/Engr” category. In the Department of Human Centered Design & Engineering, Ph.D. students Neille-Anne Herrera Tan, Emma Jean McDonald and Jay Little Cunningham received fellowships, as did iSchool Ph.D. students Nicole Simone Kuhn and Anastasia Schaadhart and University of Washington Tacoma Ph.D. student Kyle Bittner. Akeiylah Sammon DeWitt in HCDE was recognized with an honorable mention.
“Victor is working to enable systems to solve new problems by simply reading natural language texts that describe what needs to be done. This work generalizes existing supervised machine learning approaches that require large quantities of labeled training data, and is applicable to a wide range of language understanding tasks such as semantic parsing, question answering, and dialog systems,” said Zettlemoyer. “This work is particularly exciting because it opens up new ways to think about how to build more sophisticated language understanding systems with significantly less manual engineering effort.”
Zhong’s research focuses on teaching machines to read language specifications that characterize key aspects of a problem in order to efficiently learn solutions that generalize to new problems. Machine learning models typically train on large, fixed datasets and do not generalize to even closely related problems. For example, an object recognition system requires millions of training images and cannot classify new object types after deployment, while a dialogue system requires difficult-to-annotate conversations and cannot converse about new topics that emerge. Similarly, a robot trained to clean one house learns policies that will not work in new houses.
“For many such problems, large-scale data gathering is difficult, but the collection of specifications — high-level natural language descriptions of what the problem is and how the algorithm should behave — is relatively easy,” Zhong said. “I hypothesize that by reading language specifications that characterize key aspects of the problem, we can efficiently learn solutions that generalize to new problems.”
When his approach is applied to the previous examples, the object recognition system identifies new products by reading product descriptions; the dialogue system converses about new topics by reading documents and databases about those topics; and the robot identifies appropriate policies in the new house by reading about the objects present there.
Zhong’s work involves zero-shot learning, in which machine learning models trained on one distribution of data must perform well on a different, new distribution of data during inference. His most recent work spans language-to-SQL semantic parsing, game playing through reinforcement learning and task-oriented dialogue.
For language-to-SQL parsing, Zhong developed a technique called grounded adaptation. Given a new inference database, grounded adaptation first samples SQL queries according to database content and generates corresponding utterances. These are then parsed to synthesize utterance-SQL pairs. Finally, synthesized pairs consistent with the sampled SQL are used to adapt the parser to the new database. Grounded adaptation obtained state-of-the-art results on zero-shot SQL parsing on new databases, outperforming alternative techniques such as data augmentation.
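In outline, the pipeline looks something like the following sketch, where the schema, templates, and function names are hypothetical stand-ins for the learned generator and parser in Zhong’s system:

```python
# Toy stand-in for a new inference database (hypothetical schema and data)
db = {"singer": {"name": ["Adele", "Bono"], "age": [33, 61]}}

def sample_queries(db):
    """Step 1: sample SQL queries grounded in the database schema/content."""
    return [f"SELECT {col} FROM {table}"
            for table, cols in db.items() for col in cols]

def generate_utterance(sql):
    """Step 2: produce a natural-language utterance for each sampled query
    (a trivial template here; the real system uses a learned generator)."""
    _, col, _, table = sql.split()
    return f"show me the {col} of each {table}"

def parse(utterance):
    """Step 3: the parser maps the utterance back to SQL (stubbed out here)."""
    words = utterance.split()
    return f"SELECT {words[3]} FROM {words[6]}"

def grounded_adaptation(db):
    """Keep only cycle-consistent utterance-SQL pairs; the real system then
    fine-tunes the parser on these synthesized pairs for the new database."""
    return [(generate_utterance(sql), sql)
            for sql in sample_queries(db)
            if parse(generate_utterance(sql)) == sql]

pairs = grounded_adaptation(db)
```

The consistency filter in the last step is what keeps the synthesized training data trustworthy: pairs the parser cannot round-trip back to the sampled SQL are discarded rather than used for adaptation.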
Zhong also worked on extending reinforcement learning (RL) policies trained in one environment to new environments that have different underlying dynamics and rules without retraining. While RL is flexible and powerful, its lack of bias necessitates a large amount of training experience. In a paper published at the 2020 International Conference on Learning Representations, Zhong and his collaborators proposed a benchmark and a language-conditioned model that generalizes to new game dynamics via reading high-level descriptions of the dynamics. In order to succeed on this benchmark, an agent must perform multiple high-level reasoning steps to solve new games by cross-referencing language instruction, description of the environment dynamics, and environment observations. Zhong’s reading model achieved top zero-shot performance in new environments with previously unseen dynamics by efficiently reasoning between multiple texts and observations.
Traditional task-oriented dialogue systems focus on a single domain and cannot converse with users to solve tasks in new domains; a dialogue system built for restaurant reservations, for example, cannot be reused to help users book flights. In work published at the 2019 Annual Meeting of the Association for Computational Linguistics (ACL), Zhong proposed a reading task-oriented dialogue system that solves new tasks by reading language documents on how to solve them. This system extracts rule specifications from documents and decides how to respond to the user by editing extracted rules that have not yet been resolved. Zhong’s system obtained state-of-the-art performance on zero-shot dialogue tasks that require conversing with users to answer questions regarding topics not seen during training.
Zhong is now working to build intelligent, embodied assistants for every home.
“Assistants such as Siri that interpret language will need to plan not only verbal responses but physical responses. Because it is infeasible to train these assistants in every home, we must learn policies that generalize to new environments not seen during training,” Zhong said. “I hypothesize that language, given its compositional nature and the wealth of information already encoded in text, is key to enabling this kind of generalization.”
In addition to his work on learning to generalize by reading, Zhong’s research spans a variety of topics in NLP such as dialogue, question answering, semantic parsing, and knowledge base population. He has published 13 papers at top NLP and machine learning conferences.
This is the second year in a row that Apple has supported Allen School graduate students, after Jeong Joon Park was recognized with a 2020 Apple Scholars fellowship.
Kuo-Hao Zeng, a Ph.D. student working with Allen School professor Ali Farhadi, has been named a 2021 J.P. Morgan Ph.D. Fellow for his research focused on the problem of forecasting for decision making in artificial intelligence (AI). While Zeng’s work is in the context of visual forecasting and action, his algorithms and findings can be extended well beyond pixels; for example, they can facilitate policy learning for robotics applications.
Zeng aims to use his Fellowship to help AI predict future outcomes from current observations and past experiences, just as humans begin to do in infancy. Equipping artificial agents in robotics, finance and medicine with such capabilities can assist in important, even potentially life-saving, work. For instance, these agents can help warning systems for autonomous vehicles assess long-term risk by forecasting future situations for real-time response, or act as decision-making agents for stock trading, where the agent must anticipate future trends before developing a strategy. In the medical field, a model that predicts tumor growth can assist doctors in making a diagnosis.
Zeng develops deep learning models for visual forecasting and integrates these models into embodied agents for long-term decision-making tasks in interactive settings. In one paper, published at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), Zeng addressed the problem of visual reaction by teaching a virtual robot to forecast the future trajectory of objects and plan accordingly to interact with them. By having the robot play catch with a drone in a near-photo-realistic simulated environment, Zeng adjusted the robot’s actions based on visual feedback from its surroundings. From that work, he introduced a model that integrates forecasting and planning.
Zeng’s subsequent work has focused on a novel problem in robot investigation: training a robotic agent, using reinforcement learning, to unblock paths toward a target. So far he has achieved good results with agents that perform actions that change the environment around them in specific ways. By explicitly learning the expected outcome of actions using his proposed neural physics model, Zeng showed that agents can now plan in more complicated environments. The paper describing his findings, “Pushing It Out of the Way: Interactive Visual Navigation,” will appear at CVPR 2021.
“I am incredibly impressed with the quality of research Kuo-Hao conducts,” Farhadi said. “He is an outstanding researcher with a strong publication track record compared to his peers at his stage. He has worked on very challenging problems and has done outstandingly well so far. He is on his way to becoming a star researcher at the boundary of decision making and forecasting in real-world, complicated situations.”
So far, Zeng has advanced visual reasoning for robot learning through eight papers published at top AI and computer vision conferences, with another one currently under submission.
Our latest undergraduate student spotlight features Olympia, Washington native Alina Chandra, who joined the Allen School community this spring quarter. Chandra was a biochemistry major with a minor in global health, but decided her talents would be more impactful in computer science. She was recently named the 2019-2020 Freshman Presidential Medalist at the University of Washington in recognition of her high GPA, the rigor of her classes, and her number of honors courses. Chandra is also a student in the university’s Honors Program on the Interdisciplinary Honors track.
Allen School: You started out with a major in biochemistry and a minor in global health, as you envisioned a career in medicine. How will the switch to computer science help you have an impact on issues you care about, like health care equality? And do you plan to stick with your global health minor?
Alina Chandra: I once asked my global health professor Stephan Gloyd how he decided that his career path — becoming a doctor and working in global health — was the best way for him to address systemic violence. He told me that the best way to have an impactful career in this area was to figure out what I enjoy doing and what I am good at, then find a way to use those skills to approach problems relating to global health/systemic violence. This advice from Dr. Gloyd is what made me decide to switch my major to computer science. There are many possible applications of CS in global health, but the ones that I am most interested in right now are data analysis, information sharing, and perhaps also technology development for low-resource settings based on community identified needs. I plan to keep my minor because I still want the educational background to understand global health problems.
Allen School: In your Freshman Medalist profile, you said that the computer science courses you took helped you see how the field can reshape systems and improve people’s lives. Which courses inspired you to make the switch, and can you give us a few examples of what you meant by that?
AC: Both of my classes, CSE 142: Computer Programming I and CSE 143: Computer Programming II, taught respectively by Stuart Reges and Kevin Lin, did a nice job of connecting CS concepts to broader applications through career panels and in-class examples. In 142, professor Reges talked about the history of computing, the development of different types of programming languages, and how those languages shaped the modern-day computers that now dictate much of our lives.
CSE 143 showed me how powerful CS tools could be for shaping our understanding of the world. There was an assignment called the election simulator, which was very fun on a purely conceptual, puzzle-solving level, but was also really fascinating because it computed the minimum number of states required to win the presidential election for any given year, which has really interesting societal implications. Professor Lin also made a very intentional effort to teach us that computer science work has potential social justice applications and is not limited to the technology industry.
Allen School: Is there a particular area (or areas) of computer science that you are interested in exploring in the future that will enable you to combine your passion for health care, problem solving and math?
AC: I’m new to the field of computer science, and there’s a lot of areas that I haven’t explored yet. Right now, I’m really interested in learning more about machine learning, data management, and artificial intelligence.
Allen School: How did you get involved in chronic pain research?

AC: One of the advisers at the Undergraduate Research Program recommended that I apply to the Scan Design Innovations in Pain Research internship, and through that program I discovered the field of chronic pain research. I was immediately intrigued, because I could relate some of the research to my own past experience as a pediatric patient. Working under the mentorship of Dr. Jennifer Rabbits, I am contributing to two projects. In the first, I manage and analyze clinical data for a study examining whether post-surgical, in-hospital functional ability can predict acute pain and post-surgical outcomes. In the second, I designed a systematic database search and am currently reviewing abstracts for a systematic review on chronic pain development after musculoskeletal traumatic injury.
Allen School: What broader lessons have you taken away from that research experience, and do you plan to continue doing research in computer science now that you have changed majors?
AC: This experience has taught me that it is possible to conduct research that both asks fundamental questions and also has direct real-world impact. I’ve also learned that I really enjoy data analysis, and I would like to continue doing it in the future.
I am currently enjoying reading about the wealth of different types of research going on at the Allen School, and at some point, I hope to get an opportunity to do research in CS myself.
Allen School: In addition to research and your studies, you are also a writer for The Daily. How did you get involved in journalism, and what kinds of stories do you cover?
AC: My interest in journalism was spurred by the pandemic. During quarantine I started reading a lot more books and newspapers, with a particular focus on scientific communication. In addition, with so much going on in the world and no casual peer-to-peer conversations to alert me of current events, I began to more heavily rely on journalism for information. Scientific communicators Ed Yong and Allie Ward are a few of the writers who inspired me to try my hand at journalism. I mainly write for the news and science section at The Daily, and I particularly like writing articles about the wide variety of research going on at UW.
Allen School: In addition to your work with The Daily, you are editor of Voyage UW. What inspired you to focus on travel writing, and what is your favorite destination?
AC: When I came to UW, I knew that I wanted to get more experience with writing. Although I am wary of the historically Eurocentric and neocolonial tendencies of travel writing, I really like Voyage because our goal is to not look at travel from a one-dimensional rose-tinted lens, but rather to share stories from diverse individuals that help to foster cross-cultural understanding and appreciation. I also joined Voyage because it is an incredibly talented team of designers, photographers and writers, and I have learned a lot from my teammates over two years. My favorite destination is probably the Olympic mountain range here in Washington state.
Allen School: Between your classes, your research, and your extracurricular activities, how do you stay organized and keep up with everything?
AC: I have a very committed relationship with my planner.
Allen School: What are you looking forward to most as an Allen School student?
AC: I’m really looking forward to getting involved in the Allen School community!
Allen School: Outside of your studies, who or what inspires you?
AC: Nelson Mandela, Mahatma Gandhi, and Martin Luther King Jr. I do not aspire to be any of these great leaders, but I aspire to learn from their teachings about how to interact with our world in a way that makes it a better place.
Welcome to the Allen School, Alina! We are happy you joined our community!
Many approaches to measuring and supporting city-wide mobility lack the level of detail required to truly understand how pedestrians navigate the urban landscape, not to mention the quality of their journey. Transit apps can direct someone to the nearest bus stop, but they may not account for obstacles or terrain on the way to their destination. Neighborhood walkability scores that measure proximity to amenities like grocery stores and restaurants “as the crow flies” are useful if traveling by jetpack, but they are of limited benefit on the ground. Even metrics designed to assist city planners in complying with legal requirements such as the Americans with Disabilities Act are merely a floor for addressing the needs of a broad base of users — a floor that often fails to account for a wide range of mobility concerns shared by portions of the populace.
Now, thanks to the Allen School’s Taskar Center for Accessible Technology, our ability to envision urban accessibility is taking a significant turn. In a paper recently published in the journal PLOS ONE, Allen School postdoc and lead author Nicholas Bolten and Taskar Center director Anat Caspi propose a framework for modeling how people with varying needs and preferences navigate the urban environment. The framework — personalized pedestrian network analysis, or PPNA — offers up a roadmap for city planners, transit agencies, policy makers, and researchers to apply rich, user-centric data to city-scale mobility questions to reflect diverse pedestrian experiences. The team’s work also opens up new avenues of exploration at the intersection of mobility, demographics, and socioeconomic status that shape people’s daily lives.
“Too often, we think about environments and infrastructure separately from the people who use them,” explained Caspi. “It’s also the case that conventional measures of pedestrian access are derived from an auto-centric perspective. But whereas cars are largely homogeneous in their interactions with the environment, humans are not. So what we tend to get is a high-level picture of pedestrian access that doesn’t adequately capture the needs of diverse users or reflect the full range of their experiences. PPNA offers a new approach to pedestrian network modeling that reflects how humans actually interact with the environment and is a more accurate depiction of how people move from point A to point B.”
The PPNA framework employs a weighted graph-based approach to parameterize individual pedestrians’ experiences in navigating a city network for downstream analysis. The model accounts for the variability in human experience by overlaying the existing network infrastructure with constraints based on pedestrian mobility profiles, or PMPs, which represent the ability of different users to reach or traverse elements of the network.
To illustrate, the team applied PPNA to a section of downtown Seattle streets using PMPs for a stereotypical normative walking profile and one representing powered wheelchair users to determine their walksheds. A walkshed is a network-based estimate of pedestrian path reachability often used by transit agencies to define a service area in relation to a particular point of interest. But whereas the typical walkshed analysis applies a one-size-fits-all approach to evaluating access, PPNA enables multiple estimates tailored to the needs and constraints of each pedestrian profile. In this case, the researchers found that roughly one-third of the normative walkshed was inaccessible to users of powered wheelchairs.
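In graph terms, a walkshed is simply the set of network nodes reachable from an origin within some cost budget, where a pedestrian mobility profile determines each edge’s traversal cost or rules the edge out entirely. The following minimal Python sketch illustrates that idea; the toy network, edge attributes, cost functions, and thresholds are invented for illustration and are not the actual PMPs or data from the paper.

```python
import heapq

# Toy pedestrian network: (node, node, edge attributes). All values invented.
edges = [
    ('A', 'B', {'length': 80, 'incline': 0.02, 'curb_ramp': True}),
    ('B', 'C', {'length': 60, 'incline': 0.10, 'curb_ramp': True}),
    ('B', 'D', {'length': 70, 'incline': 0.01, 'curb_ramp': False}),
    ('C', 'E', {'length': 50, 'incline': 0.03, 'curb_ramp': True}),
]

def walking_cost(attrs):
    # Hypothetical normative profile: cost is just distance.
    return attrs['length']

def wheelchair_cost(attrs):
    # Hypothetical powered-wheelchair profile: missing curb ramps and
    # steep inclines make an edge untraversable; inclines are penalized.
    if not attrs['curb_ramp'] or attrs['incline'] > 0.08:
        return None
    return attrs['length'] * (1 + 5 * attrs['incline'])

def walkshed(origin, cost_fn, budget):
    """Nodes reachable from origin within a cost budget (Dijkstra)."""
    adj = {}
    for u, v, attrs in edges:
        c = cost_fn(attrs)
        if c is not None:  # drop edges the profile cannot traverse
            adj.setdefault(u, []).append((v, c))
            adj.setdefault(v, []).append((u, c))
    dist, frontier = {origin: 0}, [(0, origin)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, c in adj.get(u, []):
            if d + c <= budget and d + c < dist.get(v, float('inf')):
                dist[v] = d + c
                heapq.heappush(frontier, (d + c, v))
    return set(dist)
```

With a budget of 200, the hypothetical walking profile reaches all five nodes, while the wheelchair profile reaches only A and B: the missing curb ramp and the steep segment cut off the rest of the walkshed, mirroring the kind of gap the PPNA analysis quantifies.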
Based on their analysis, the team introduced the concept of normalized sidewalk reach, an alternative measure of sidewalk network accessibility designed to overcome biases inherent in walkshed analysis and other conventional approaches. This new, improved analysis accounts not only for the infrastructure present in the environment, but also its applicability to the humans who use it. According to Bolten, the proliferation of user data combined with new technologies have made it possible to perform such analyses at a greater level of detail than ever before.
“Crowdsourcing efforts like the OpenStreetMap initiative and emerging computer vision techniques are improving our ability to collect detailed pedestrian network data at scale. In addition, growing use of location tracking technologies and detailed surveys are enhancing our ability to model pedestrian behaviors and needs,” explained Bolten, who began working with Caspi while earning his Ph.D. from the University of Washington Department of Electrical & Computer Engineering. “By combining more detailed pedestrian network data with more robust characterizations of pedestrian preferences, we will be able to account for needs that may have been overlooked in the past while supporting the investigation of complex urban research questions.”
Among these potential questions is how connectivity for specific pedestrian profiles intersects with demographics and/or unequal access to services and amenities. For example, the authors envision that PPNA could be applied to identify where poor connectivity for one group of users overlaps with areas designated as food deserts.
While the authors focused primarily on pedestrian mobility, given that such concerns tend to be overlooked or dealt with only superficially, they note that their approach could be expanded to cover other aspects of the pedestrian experience. Additional criteria could include the perceived pleasantness of a trip, how crowded a route may be, noise levels, and more.
“When it comes to understanding how people traverse their communities, there is a vast range of user needs and preferences that are ripe for investigation to help decision-makers move beyond a one-size-fits-many approach,” observed Caspi. “By introducing quantitative measures of diverse pedestrian concerns, we can gain deeper insights into which needs are being considered and which are not across a city’s network. We hope to continue to grow our engagement with the Disability community along with city and transportation planners, to increase use of data-driven methods towards building sustainable, accessible communities.”
This latest work is a natural outgrowth of the Taskar Center’s OpenSidewalks project, which focuses on understanding and improving the pedestrian experience through better data collection. Caspi and her team also recently launched the Transportation Data Equity Initiative in collaboration with the Washington State Transportation Center (TRAC) and other public and private partners to improve data standards and extend the benefits of new mobility planning tools to underserved groups.
In the winning paper, Suciu and then-student Abhay Jha (Ph.D., ‘12), now a senior machine learning scientist at Amazon, explored more efficient ways to construct decision diagrams — tools that simplify the task of computing the probability of a complex event. Their paper was the first to propose “query compilation to decision diagrams” and to identify the connection between static analysis on the query and the complexity of the resulting decision diagrams.
A structured query language (SQL) query is a small program that retrieves answers from a database, sometimes from among billions of records. In a probabilistic relational database, where each record is annotated with a probability of being present, the query must also return a probability for each of its answers. The team’s paper uses decision diagrams for this purpose.
“Decision diagrams have been used for many years in formal verification, because they simplify the task of computing the probability of a complex event,” said Suciu, a member of the University of Washington’s Database Group. “We proposed to construct a decision diagram from an SQL query and a relational database, a process called ‘query compilation.’ The challenge in query compilation is to keep the size of the decision diagram small.”
Suciu said a naive construction leads to a decision diagram that is exponentially large in the number of tuples in the database, which is unacceptable when databases hold billions of records. The paper describes more efficient ways to construct the decision diagram and characterizes precisely for which queries the resulting diagram will be small and for which it will necessarily have exponential size.
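To make the general idea concrete, here is a minimal Python sketch of how a decision diagram reduces probability computation to a bottom-up pass over its nodes. This illustrates the technique only, not the paper’s construction: the event, the hand-built diagram, and the tuple probabilities are all invented. The event is the lineage (x1 AND y1) OR (x2 AND y2) of a hypothetical join over four independent uncertain tuples.

```python
from functools import lru_cache

# Each variable is an independent tuple-presence event (invented values).
probs = {'x1': 0.5, 'y1': 0.8, 'x2': 0.4, 'y2': 0.9}

# Decision diagram node: (variable, low_child, high_child);
# terminals are the Python booleans True / False.
# The (x2, y2) subdiagram is shared between two branches;
# such sharing is what keeps good diagrams small.
x2y2 = ('x2', False, ('y2', False, True))
bdd = ('x1',
       x2y2,                 # x1 absent: need x2 AND y2
       ('y1', x2y2, True))   # x1 present: done if y1 present, else fall back

@lru_cache(maxsize=None)     # shared subdiagrams are evaluated only once
def probability(node):
    """Shannon expansion: P(node) = (1 - p) * P(low) + p * P(high)."""
    if node is True:
        return 1.0
    if node is False:
        return 0.0
    var, low, high = node
    p = probs[var]
    return (1 - p) * probability(low) + p * probability(high)
```

Here P(x1 AND y1) = 0.4 and P(x2 AND y2) = 0.36, so the event has probability 1 − 0.6 × 0.64 = 0.616, which the bottom-up pass reproduces; for the “good” queries the paper characterizes, the diagram itself stays small.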
“The determination between ‘good’ queries and ‘bad’ queries can be done only through static analysis on the query without the need to examine the database and can be adapted to several variants of the decision diagram,” Suciu said.
Follow-up work by Suciu and Jha, as well as by other researchers, has expanded the study of this connection between static analysis on the query and the complexity of the resulting decision diagrams.
Allen School professor Jeffrey Heer has been honored by the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (SIGCHI) with election to the CHI Academy. The Academy is composed of researchers who have made substantial, cumulative contributions to the field of human-computer interaction by developing new research directions and innovations that have influenced the work of their peers. He is one of only eight new members elected to the CHI Academy this year.
Heer, who holds the Jerre D. Noe Endowed Professorship at the Allen School, is a leading researcher in HCI, social computing and data visualization. He directs the Interactive Data Lab, where he and his team investigate the perceptual, cognitive, and social factors involved in making sense of large data collections. They then apply these insights to the development of novel interactive systems for visual analysis and communication, including tools to enhance interactive visualization on the web.
As a graduate student at the University of California, Berkeley, Heer created Prefuse, one of the first software frameworks for information visualization, and Flare, a version of Prefuse built for Adobe Flash that was partly informed by his work on animated transitions. As a faculty member at Stanford University, he worked with Ph.D. student Mike Bostock on Protovis, a graphical toolkit for visualization that combined the efficiency of high-level visualization systems with the expressiveness and accessibility of low-level graphical systems, and on Data-Driven Documents (D3), which succeeded Protovis as the de facto standard for interactive visualizations on the web. Heer also contributed to Data Wrangler, an interactive tool for cleaning and transforming raw data that was developed by researchers at Stanford and Berkeley. Heer and colleagues co-founded a startup company, Trifacta, to commercialize that work.
Since joining the Allen School faculty in 2013, Heer and his students have worked on a suite of complementary tools for data analysis and visualization design built on Vega, a declarative language for producing interactive visualizations. These tools include Lyra, an interactive environment for generating customized visualizations, and Voyager, a recommendation-powered visualization browser. Vega led to Vega-Lite, a high-level grammar for rapid and concise specification of interactive data visualizations.
“I am honored to be named to the CHI Academy, with so many members who have been mentors and role models to me throughout my career,” Heer said. “And it’s a special treat to be included in this particular cohort, alongside my wonderful Ph.D. advisor Maneesh Agrawala and my inspiring colleague and former internship advisor Fernanda Viégas.”
Allen School alumnus Ming Liu (Ph.D., ‘20) received the honorable mention for the 2021 ACM SIGCOMM Doctoral Dissertation Award for Outstanding Ph.D. Thesis in Computer Networking and Data Communication from the Association for Computing Machinery’s Special Interest Group on Data Communications. The award committee recognized Liu for “identifying and enabling novel uses of programmable network devices in data centers, including an in-network computing solution for accelerating distributed applications, and a microservice execution platform running on Smart Network Interface Cards (NICs)-accelerated servers.”
Liu’s thesis, “Building Distributed Systems Using Programmable Networks,” addresses how to best leverage emerging data center network computational elements. His aim is to help overcome stagnating CPU performance, rapid growth in network bandwidth, and the increasing cost of data transfers. His thesis makes multiple contributions using a broad range of devices spanning programmable SmartNICs, programmable switches, and network-attached accelerators to improve the performance and energy profile of cloud-based applications.
“Ming’s research could have a long-term impact on how we use programmable networks inside data centers,” said his advisor, Allen School professor Arvind Krishnamurthy. “There has been a surge in the design of programmable networking hardware, but it isn’t clear what problems it can solve. Ming’s work has focused on identifying novel uses of these hardware devices and enabling the pervasive use of programmable network devices in data centers. Within five years, he has made outstanding progress on this front, with many projects at the intersection of distributed systems and networking.”
Three components of Liu’s thesis advanced improvements to the performance and energy profile of cloud-based applications. IncBricks is a hardware-software co-designed system that supports caching in the network using a programmable network middlebox. With this, Liu provides significant performance benefits such as reducing end-host costs, lowering latency, and improving throughput. His iPipe project is an actor-based framework for offloading distributed applications on SmartNICs. iPipe allows a SmartNIC to be safely multi-programmed, with a real-time scheduler, process migration, and resource isolation mechanisms providing security and portability across different hardware NIC designs. Liu also developed E3, a microservice execution platform that can opportunistically move computation onto a SmartNIC to yield massive energy savings — a valuable contribution given that data centers threaten to overwhelm the nation’s power grid in the near future.
Liu, who earned his Ph.D. while working with Krishnamurthy and professor Luis Ceze, is currently a postdoctoral researcher at VMware and will join the University of Wisconsin-Madison in the fall as a professor in the computer science department.