
Recent faculty hires expand the Allen School’s leadership in machine learning, computational biology, systems security, and more


While 18 months of pandemic-induced remote learning and research may have brought a feeling of stasis to many areas of our lives, there is one where the opposite is true: Allen School faculty hiring. Over the past two hiring cycles, the school kept moving forward with virtual campus tours and interviews conducted over Zoom, with the result that 15 new faculty members have joined or will soon be joining our community. As we return to campus and settle into familiar routines once again, we look forward to celebrating the contributions of these outstanding educators and innovators, who will strengthen our leadership at the forefront of our field while building on our commitment to advancing computing for social good.

“Our new faculty members bring expertise in core and emerging areas and will help us to expand our leadership in computing innovation and in applying computing innovation to society’s most pressing challenges,” said Magdalena Balazinska, professor and director of the Allen School. “I am excited to work alongside them to build on our tradition of delivering breakthrough research while educating the next generation of leaders in our field, forging high-impact collaborations across campus and in the broader community, and creating an environment that is supportive and welcoming to all.”

Advancing secure and scalable systems

Leilani Battle: Human-centered data management, analysis, and visualization


Leilani Battle applies a human-centered perspective to the development of scalable analytics systems to solve a range of data-intensive problems. While her research is anchored in the field of databases, Battle employs techniques from human-computer interaction and visualization to integrate large-scale data processing with interactive visual analysis interfaces. Using this integrative approach, she designs and builds intelligent exploration systems that adapt to diverse users’ needs, goals and behaviors — making it easier for people to understand and leverage data to support more effective decision making. An example is ForeCache, a prediction system designed to allow researchers to more efficiently browse and retrieve data while reducing latency via prefetching. Battle also develops techniques for evaluating the performance of exploration systems in order to build more effective models of human analysis behavior.

Battle is no stranger to the UW; she earned her bachelor’s degree in computer engineering from the Allen School in 2011 and later returned to complete a postdoc with the UW Database Group and UW Interactive Data Lab. She rejoined the school — this time as a faculty member — this past summer after spending three years as an assistant professor at the University of Maryland, College Park. Battle earned her Ph.D. from MIT in 2017 and was named one of MIT Technology Review’s Innovators Under 35 last year.


David Kohlbrenner: Trustworthy hardware and software

David Kohlbrenner joined the Allen School faculty as a co-director of the Security and Privacy Research Lab in fall 2020 after completing a postdoc at the University of California, Berkeley and earning his Ph.D. from the University of California San Diego. Kohlbrenner’s research spans security, systems, and architecture, with a particular focus on the impact of hardware design and behavior on high-level software security.

Through a series of practical projects involving real-world test cases, Kohlbrenner explores how to build trustworthy systems that are resilient to abstraction failures. His contributions include Keystone, an open-source framework for building flexible trusted execution environments (TEEs) on unmodified RISC-V platforms, and Fuzzyfox, a web browser resistant to timing attacks. The time fuzzing techniques Kohlbrenner implemented as part of the latter project were subsequently incorporated into the Chrome, Edge and Firefox browsers. Kohlbrenner’s ongoing work aims to address open problems in preventing the risks posed by novel microarchitectural designs, to expand the capabilities of the Keystone framework, and to support the secure deployment of cloud FPGAs.

Simon Peter: Data center design for reliable and energy-efficient cloud computing


Simon Peter will join the Allen School faculty in January 2022 from the University of Texas at Austin, where he has spent the past six years on the faculty leading research in operating systems and networks. Peter focuses on the development and evaluation of new hardware and software that improve data center energy efficiency and availability while decreasing cost in the face of increased workloads. Much of Peter’s recent work has focused on redesigning the server network stack to dramatically lower latency and overhead while increasing throughput — ideas that have been deployed by Google on a large scale — as well as novel approaches for achieving significant performance improvements in file system and tiered memory management, low latency accelerators, and persistent memory databases.

Peter’s current work revolves around the development of techniques for building large-scale systems with lower operational latency — potentially 1000x lower. He is also exploring the design of power-resilient systems that can function reliably in an age of increasingly volatile energy supplies. Peter is already a familiar face at the Allen School, having completed a postdoc in the Computer Systems Lab after earning his Ph.D. from ETH Zurich. He is a past recipient of a Sloan Research Fellowship, an NSF CAREER Award, a SIGOPS Hall of Fame Award, and two USENIX Jay Lepreau Best Paper Awards.

Pushing the state of the art in artificial intelligence


Simon Shaolei Du: Theoretical foundations of machine learning

Simon Shaolei Du joined the Allen School in summer 2020 after completing a postdoc at the Institute for Advanced Study. Du’s research focuses on advancing the theoretical foundations of modern machine learning — with a particular emphasis on deep learning, representation learning and reinforcement learning — to produce efficient, principled and user-friendly methods for applying machine learning to real-world problems. To that end, he aims to leverage the principles that make deep learning such a powerful tool to build stronger models as well as take advantage of the structural conditions underpinning efficient sequential decision-making problems to design more efficient reinforcement learning algorithms. 

Du’s contributions include the first global convergence proof of gradient descent for optimizing deep neural networks. He also demonstrated the statistical advantage of employing convolutional neural networks over fully-connected neural networks for learning image classification, earning an NVIDIA Pioneer Award for his efforts. He has published more than 50 papers at top conferences in the field, including the Conference on Neural Information Processing Systems (NeurIPS) and the International Conference on Machine Learning (ICML). Du holds a Ph.D. in machine learning from Carnegie Mellon University.

Abhishek Gupta: Robotics and machine learning


Abhishek Gupta will join the Allen School faculty in fall 2022 after completing a postdoc at MIT. He previously earned his Ph.D. from the University of California, Berkeley as a member of the Berkeley Artificial Intelligence Research (BAIR) Laboratory. Gupta’s research focuses on the development of deep reinforcement learning algorithms that will enable robotic systems to autonomously collect data and continuously learn new behaviors in real-world situations. His goal is to enable robots to function safely and effectively in human-centric, unstructured environments under a variety of conditions.

Already, Gupta has contributed to this emerging paradigm via a series of projects focused on robotic control via reinforcement learning. For example, he demonstrated algorithms for learning complex tasks via more “natural” forms of communication such as video demonstrations and human language. Gupta also designed systems that employ large-scale, uninterrupted data collection to learn dexterous manipulation tasks without intervention, while bootstrapping their own learning from only small amounts of prior data provided by human supervisors. In addition, he has explored techniques to enable the efficient transfer of learning across robots and tasks via exploratory and unsupervised RL algorithms, making fundamental contributions in algorithms and systems for robotic reinforcement learning. Looking ahead, Gupta aims to apply the data gathered from real-world deployment of such systems to make robots more adaptive and capable of generalizing across a variety of tasks, objects and environments in practically relevant settings like homes, hospitals and workplaces.


Ranjay Krishna: Visual intelligence from human learning

Ranjay Krishna will join the Allen School faculty next September from Facebook AI Research, where he is spending a year as a research scientist after earning his Ph.D. from Stanford University. Krishna’s work at the intersection of computer vision and human-computer interaction draws upon ideas from the cognitive and social sciences, such as human perception and learning, to enable machines to acquire new knowledge and skills via social interactions with people — and ultimately enable people to personalize artificial intelligence systems without the need for prior programming experience.

Krishna has applied this multidisciplinary approach to produce new representations and models that have pushed the state of the art in a variety of core computer vision tasks. For example, he introduced a new category of dense, detailed computational representations of visual information, known as scene graphs, that transformed the computer vision community’s approach to image captioning, object localization, question answering and more. Krishna introduced the technique as part of his Visual Genome project, which has since become the de facto dataset for pre-training object detectors for downstream tasks. He also collaborated on the development of an AI agent that learns new visual concepts from interactions with social media users while simultaneously learning how to improve the quality of those interactions through natural language questions and ongoing implicit feedback. Krishna intends to build on this work to establish human interaction as a core component of how we train computer vision models and deploy socially capable AI.


Ludwig Schmidt: Empirical and theoretical foundations of machine learning

Ludwig Schmidt joined the Allen School faculty this fall after completing a postdoc at the University of California, Berkeley and spending a year as a visiting research scientist working with the robotics team at Toyota Research. He earned his Ph.D. from MIT, where he received the George M. Sprowls Award for best Ph.D. thesis in computer science for his work examining the application of approximate algorithms in statistical settings, including the reasons behind their sometimes unexpectedly strong performance in both theory and practice.

Schmidt’s current research advances the empirical and theoretical foundations of machine learning, with an emphasis on datasets, robust methods, and new evaluation paradigms for effectively benchmarking performance. For example, he and his collaborators assembled new test sets for the popular ImageNet benchmark to investigate how well current image classification models generalize to new data. The accuracy of even the best models fell by 11%–14%, a drop that documented the extent to which distribution shift remains a major unresolved problem in machine learning and contributes to the brittleness of even state-of-the-art models. In another study, Schmidt and his colleagues effectively dispelled the prevailing wisdom around the problem of adaptive overfitting in classification competitions by demonstrating that repeated use of test sets does not lead to unreliable accuracy measurements. By combining theoretical insights with rigorous methodology, Schmidt’s goal is to ensure the machine learning systems that power emerging technologies are safe, secure, and dependable for real-world deployment.

Yulia Tsvetkov: Natural language processing for ethical, multilingual, and public-interest applications


Yulia Tsvetkov arrived at the Allen School this past summer from Carnegie Mellon University, where she earned her Ph.D. and spent four years as a faculty member of the Language Technologies Institute after completing a postdoc at Stanford. Tsvetkov engages in multidisciplinary research at the nexus of machine learning, computational linguistics and the social sciences to develop practical solutions to natural language processing problems that combine sophisticated learning and modeling methods with insights into human languages and the people who speak them. 

Tsvetkov’s goal is to advance ethical natural language technologies that transcend individual language and cultural boundaries while also ensuring equitable access — and freedom from bias — for diverse populations of users. To that end, she and her collaborators have developed novel techniques for automatically detecting veiled discrimination and dehumanization in newspaper articles and in social media conversations, as well as tools for identifying subtle yet pernicious attempts at online media manipulation at scale while exploring how latent influences on the media affect public discourse across countries and governments. Her team is also pioneering language technologies for real-world high-stakes scenarios, including the use of socially responsible natural language analytics in child welfare decision-making. In addition, Tsvetkov and her colleagues have made fundamental contributions toward enabling more intelligent, user- and context-aware text generation with applications to machine translation, summarization, and dialog modeling. They introduced continuous-output generation, an approach to training natural language models that dramatically accelerates their training time, and constraint-based generation, an approach to incorporating fine-grained constraints at inference time from large pretrained language models to control for various attributes of generated text.

Innovating at the intersection of computing and biology

Vikram Iyer: Wireless systems, bio-inspired sensing, microrobotics, and computing for social good


Vikram Iyer connects multiple engineering domains and biology in order to build end-to-end wireless systems in a compact and lightweight form factor that push the boundaries of what technology can do and where it can do it. He has produced backscatter systems for ultra-low power and battery-free sensing and communication, 3D-printed smart objects, insect-scale robots, and cameras and sensors small enough to be carried by insects such as beetles, moths and bumblebees. His work has a range of potential applications, including environmental monitoring, sustainable computing, implantable medical devices, digital agriculture, and wildlife tracking and conservation. Last year, he worked with Washington state’s Department of Agriculture to wirelessly track the invasive Asian giant hornet — also known as the “murder hornet” — leading to the destruction of the first nest in the United States.

Iyer, who joined the faculty this fall after earning his Ph.D. from the UW Department of Electrical & Computer Engineering, was already a familiar face around the Allen School thanks to his collaboration with former advisor — now faculty colleague — Shyam Gollakota in the Networks & Mobile Systems Lab. He earned a 2020 Paul Baran Young Scholar Award from the Marconi Society, a 2018 Microsoft Ph.D. Fellowship, and Best Paper awards from SenSys 2018 and SIGCOMM 2016 for his work on 3D localization of sub-centimeter devices and backscatter technology enabling wireless connectivity for implantable devices, respectively.

Sara Mostafavi: Computational biology and machine learning to advance our understanding and treatment of disease


Sara Mostafavi joined the Allen School faculty in fall 2020 after spending five years as a faculty member at the University of British Columbia. Mostafavi, who holds a Ph.D. from the University of Toronto, focuses on the development of machine learning and statistical methods for understanding the complex biological processes that contribute to human disease. Her work is highly multidisciplinary, involving collaborators in immunology, neurosciences, genetics, psychiatry, and more.

Mostafavi is particularly interested in developing computational tools that enable researchers to distinguish meaningful relationships from spurious ones across high-dimensional genomic datasets. For example, her group developed models that account for hidden confounding factors in whole-genome gene expression studies in order to disentangle cause-and-effect relationships of upstream genetic and environmental variables that may contribute to neurodegenerative disease. Using this new framework, researchers identified a group of signaling genes linked to neurodegeneration that has yielded potential new drug targets for Alzheimer’s disease. Building on this and other past work, Mostafavi and her colleagues explore the application of deep learning and other approaches to unravel contributing factors in neurodegenerative and psychiatric diseases, the relationship between genetic variation and immune response, and the causes of rare genetic diseases in children.

Jeff Nivala: Molecular programming and synthetic biology


Jeff Nivala is a research professor in the Allen School’s Molecular Information Systems Lab (MISL), a partnership between the UW and Microsoft that advances technologies at the intersection of biology and information technology. Nivala’s research focuses on the development of scalable storage and communication systems that bridge the molecular and digital interface. Recent contributions include Porcupine, an extensible molecular tagging system that introduced the concept of “molbits,” or molecular bits, which comprise unique barcode sequences made up of strands of synthetic DNA that can be easily programmed and read using a portable nanopore device. Nivala also led the team behind NanoporeTERS, a new kind of engineered reporter protein for biotechnology applications that enables cells to “talk” to computers. The system represented the first demonstration of the utility of nanopore readers beyond the DNA and RNA sequencing for which they were originally designed.

Nivala joined the Allen School faculty this past spring after spending nearly four years as a research scientist and principal investigator in the MISL. His arrival was a homecoming of sorts, as he previously earned his bachelor’s in bioengineering from the UW before going on to earn his Ph.D. in biomolecular engineering at the University of California Santa Cruz and completing a postdoc at Harvard Medical School. He earned a place on Forbes’ 2017 list of “30 under 30” in science and holds a total of nine patents awarded or pending. 

Chris Thachuk: Molecular programming to enable biocomputing and precise assembly at the nanoscale


Chris Thachuk combines principles from computer science, engineering and biology to build functional, programmable systems at the nanoscale using biomolecules such as DNA. His work spans the theoretical and experimental to forge new directions in molecular computation and synthetic biology. For example, in breakthrough work published earlier this year in the journal Science, Thachuk and his collaborators demonstrated a technique that, for the first time, enables the placement of DNA molecules not only in a precise location but also in a precise orientation by folding them into a small moon shape. Their approach overcame a core problem for the development of computer chips and miniature devices that integrate molecular biosensors with optical and electronic components. Previously, Thachuk developed a “molecular breadboard” for compiling next-generation molecular circuits that operate on a timescale of seconds and minutes, as opposed to hours or days. That project provides a springboard for the future development of biocomputing applications such as in situ molecular imaging and point-of-care diagnostics.

Thachuk joined the Allen School faculty after completing postdocs at Caltech and Oxford University, where he was also a James Martin Fellow at the Institute for the Future of Computing. He earned his Ph.D. from the University of British Columbia working with professor and Allen School alumna Anne Condon (Ph.D., ‘87). 

Sheng Wang: Computational biology and medicine


Sheng Wang joined the Allen School this past January after completing a postdoc at Stanford University’s School of Medicine. Wang, who earned his Ph.D. from the University of Illinois at Urbana-Champaign, focuses on the development of high-performance, interpretable artificial intelligence that co-evolves and collaborates with humans, with a particular interest in machine learning and natural language processing techniques that will advance biomedical research and improve health care outcomes.

Wang’s research has expanded human knowledge and opened up new avenues of exploration in biomedicine while advancing AI modeling at a fundamental level. For example, he developed a novel class of open-world classification models capable of generalizing predictions to new tasks even in the absence of human annotations. His work, which represented the first general framework for enabling accurate predictions on new tasks in biomedicine using limited curation, was used by a team of biologists to classify millions of single cells into thousands of novel cell types, most of which had no annotated cells before. He also built a biomedical rationale system that uses a biomedical knowledge graph to generate natural-language explanations of an AI model’s predictions for tasks such as drug target identification and disease gene prediction. Going forward, Wang aims to build upon this work by developing new methods for optimizing human-AI collaboration to accelerate biomedical discovery.

Educating the next generation of leaders


Ryan Maas: Data management, data science, and CS education

Ryan Maas joined the faculty last year as a teaching professor after earning his master’s degree in 2018 working with Allen School professor and director Magdalena Balazinska in the UW Database Group. He also spent time as a research scientist at the UW eScience Institute. Maas’ research focused on scaling linear algebra algorithms for deployment on distributed database systems to support machine learning applications. He was a contributor to Myria, an experimental big data management and analytics system offered as a cloud-based service by the Allen School and eScience Institute to support scientific research in various domains.

Maas previously served as a lecturer and teaching assistant for both introductory and advanced courses in data management and data science. He also contributed to the development and teaching of a new Introduction to Data Science course for non-majors in collaboration with colleagues at the Allen School, Information School and Department of Statistics. Prior to enrolling in the Allen School, Maas began his graduate studies in astrophysics at the University of California, Berkeley after earning B.S. degrees in physics and astronomy from the UW.

Robbie Weber: Theoretical computer science and CS education


Robbie Weber joined the faculty as a teaching professor in 2020 after earning his Ph.D. working with professors Anna Karlin and Shayan Oveis Gharan in the Allen School’s Theory of Computation group. Weber’s research focuses on algorithm design for graph and combinatorial problems, with a particular emphasis on the use of classical tools to study pairing problems such as stable matching, online matching and tournament design for real-world applications.

Weber teaches an array of “theoretical and theory-adjacent” courses — from foundational to advanced — for both majors and non-majors. His goal is to make theoretical computer science accessible, interesting, and relevant to students of any discipline. Prior to joining the faculty, Weber foreshadowed his future career path by serving as an instructor or teaching assistant for a variety of Allen School courses, including Data Structures and Parallelism, Algorithms and Computational Complexity, Machine Learning, Foundations of Computing, and more. In 2019, he earned the Bob Bandes Memorial Teaching Award in recognition of his contributions to student learning inside and outside of the classroom.

Photo of Sylvan Grove columns by Doug Plummer