Allen School’s Samia Ibtasam receives Google Women Techmakers Scholarship

Woman in Google sweatshirt with foggy Golden Gate Bridge in background

Samia Ibtasam, a third-year Ph.D. student in the Allen School’s Information & Communication Technology for Development (ICTD) Lab, was recognized recently by Google with a Women Techmakers Scholarship. The Women Techmakers Scholars Program — formerly known as the Anita Borg Memorial Scholarship Program — aims to advance gender equality in computing by encouraging women to become active role models and leaders in the field. Ibtasam, who works with Allen School professor Richard Anderson to increase financial and technological inclusion in low-resource communities, is one of only 20 women at universities across North America to receive a 2019 scholarship.

Ibtasam’s focus on research addressing issues in developing communities and her commitment to increasing diversity in computing were both shaped by her experiences in her native Pakistan. Before her arrival at the University of Washington in 2016, Ibtasam was the founding co-director of the Innovations for Poverty Alleviation Lab (IPAL) at the Information Technology University (ITU) in Lahore. During her time at IPAL, Ibtasam focused on developing diagnostic applications and information systems to support maternal, neonatal, and child health in a country that has one of the highest infant mortality rates in the world.

Working with the government of the Pakistani province of Punjab and the United Kingdom’s Department for International Development (DFID), Ibtasam launched Har Zindagi — Urdu for “every life,” inspired by the slogan “every life matters.” The goal of the program was to revamp the child immunization system and introduce digital health records for the province’s citizens. One of Ibtasam’s priorities was to make the immunization card machine-readable to enable public health administrators to easily and effectively monitor child vaccinations, and to educate parents, including many with low literacy, about the recommended immunization schedule. Ibtasam also led the development of an Android-based mobile app to assist vaccinators in entering information in the field, as well as a web dashboard to track immunization coverage and retention data for policymakers. Along the way, she had to contend with a number of challenges, including lack of wireless coverage in rural areas, field workers unfamiliar with smartphone technology, family displacement, and the need to coordinate among diverse stakeholders, from the World Health Organization and UNICEF, to local government officials and app developers.

For the four years she was at ITU, Ibtasam was the only woman on the computer science faculty. She is also the first Pakistani woman to formally specialize in the field of ICTD. These early-career experiences have reinforced her belief in the need to cultivate more women as leaders and role models through programs like the Women Techmakers Scholarship.

“One thing that I did and am still continuing to work on is inspiring other young women to be technical and confident. As the only female faculty member in ITU’s CS department, I saw the need and value of being a role model and source of support for young women,” Ibtasam said. “Throughout my career, I have been in many work environments — research labs, conferences, meetings, and panels — where I was the only woman. It took courage and effort to be part of them, but it also made me resilient to stay, speak up, and represent. And while at times, there were male mentors who provided advice and support, many times doors were closed to me. So, I want to hold doors open for other women through my inspiration, my mentorship, my research, and my designed technologies.”

Ibtasam has carried this theme into her work at the Allen School, where she focuses on projects that explore the availability and adoption of digital financial services by people in developing and emerging markets — an issue that impacts more than two billion people worldwide who remain unbanked. The issue also has profound implications for women’s empowerment in societies where cultural, religious, and even legal frameworks may hinder their financial independence. 

For example, Ibtasam was the lead author of a paper exploring women’s use of mobile money in Pakistan that appeared at the 1st ACM SIGCAS Conference on Computing and Sustainable Societies (COMPASS ‘18) organized by the Association for Computing Machinery’s Special Interest Group on Computers & Society. Working alongside colleagues in the ICTD Lab, the Information Technology University in Lahore, and the Georgia Institute of Technology, Ibtasam examined how gendered barriers in resource-constrained communities affect opportunities for women to improve their circumstances through the use of digital financial services. With information gleaned from more than 50 semi-structured interviews with Pakistani women and men along with data from the Financial Inclusion Insights Survey, the team identified the socioeconomic, religious, and gendered dynamics that influence the ability of women in the country to access financial services. Based on these findings, the team proposed a set of recommendations for expanding the benefits of digital financial services to more women.

The previous year, Ibtasam and co-authors presented a paper at the ACM’s 9th International Conference on Information and Communication Technologies and Development (ICTD ‘17) that evaluated how learnability and other factors beyond access can help or hinder widespread adoption of smartphone-based mobile money applications in Pakistan. In addition, Ibtasam has explored the role of mobile money in improving financial inclusion in southern Ghana, which also appeared at COMPASS ‘18, and teamed up with members of the Allen School’s Security and Privacy Research Lab to investigate the computer security and privacy needs and practices of refugees who had recently resettled in the United States. The latter appeared at the 2018 IEEE Symposium on Security and Privacy (S&P ’18) co-sponsored by the IEEE Computer Society’s Technical Committee on Security and Privacy and the International Association for Cryptologic Research.

Ibtasam and her fellow Google scholars were invited to a retreat at the company’s headquarters in Mountain View, California in June, where they had the opportunity to engage in a variety of professional development and networking activities. Previous Allen School recipients of the Google scholarship include Ph.D. alumni Kira Goldner (2016), Eleanor O’Rourke and Irene Zhang (2015), Jennifer Abrahamson and Nicola Dell (2012), Janara Christensen and Kateryna Kuksenok (2011), and Lydia Chilton and Kristi Morton (2010).

Congratulations, Samia!

Allen School hosts Mathematics of Machine Learning Summer School

Man in glasses pointing at white board.
Kevin Jamieson at the Mathematics of Machine Learning Summer School.

In late July and early August, the Allen School was pleased to host a very successful MSRI Mathematics of Machine Learning Summer School. Around 40 Ph.D. students from across the country — and beyond — were brought to the University of Washington for two weeks of lectures by experts in the field, as well as problem sessions and social events. The lectures were also opened up to the broader UW community and recorded for future use.

Learning theory is a rich field at the intersection of statistics, probability, computer science, and optimization. Over the last few decades, the statistical learning approach has been successfully applied to problems in many areas of great interest, such as bioinformatics, computer vision, speech processing, robotics, and information retrieval. These impressive successes relied crucially on the mathematical foundation of statistical learning.

Recently, deep neural networks have demonstrated stunning empirical results across many applications like vision, natural language processing, and reinforcement learning. The field is now booming with new mathematical problems, and in particular, the challenge of providing theoretical foundations for deep learning techniques is still largely open. On the other hand, learning theory already has a rich history, with many beautiful connections to various areas of mathematics such as probability theory, high dimensional geometry, and game theory. The purpose of the summer school was to introduce graduate students and advanced undergraduates to these foundational results, as well as to expose them to the new and exciting modern challenges that arise in deep learning and reinforcement learning.

Woman pointing at white board.
Emma Brunskill

Participants explored a variety of topics with the guidance of lecturers Joan Bruna, a professor at New York University (deep learning); Stanford University professor Emma Brunskill (reinforcement learning); Sébastien Bubeck, senior researcher at Microsoft Research (convex optimization); Allen School professor Kevin Jamieson (bandits); and Robert Schapire, principal researcher at Microsoft Research (statistical learning theory).

The summer school was made possible by support from the Mathematical Sciences Research Institute, Microsoft Research, and the Allen School, with the cooperation of the Algorithmic Foundations of Data Science Institute at the University of Washington. The program was organized by Bubeck and Adith Swaminathan of Microsoft Research and Allen School professor Anna Karlin.

Learn more by visiting the summer school website here, and check out the video playlist here!

Allen School releases MuSHR robotic race car platform to drive advances in AI research and education

Close-up shot of blue MuSHR robotic race car with #24 and Husky graphic painted in white
An example of the MuSHR robotic race car, which people can build from scratch for research or educational purposes — or just for fun. Mark Stone/University of Washington

The race is on to cultivate new capabilities and talent in robotics and artificial intelligence, but the high barrier to entry for many research platforms means many would-be innovators could be left behind. Now, thanks to a team of researchers in the Personal Robotics Laboratory at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, expert and aspiring roboticists alike can build a fully functional, robotic race car with advanced sensing and computational capabilities at a fraction of the cost of existing platforms. MuSHR — short for Multi-agent System for non-Holonomic Racing — provides researchers, educators, students, hobbyists, and hackers a vehicle to advance the state of the art, learn about robotic principles, and experiment with capabilities that are normally reserved for high-end laboratories.

“There are many tough, open challenges in robotics, such as those surrounding autonomous vehicles, for which MuSHR could be the ideal test platform,” said lab director Siddhartha Srinivasa, a professor in the Allen School. “But beyond research, we also need to think about how we prepare the next generation of scientists and engineers for an AI-driven future. MuSHR can help us answer that challenge, as well, by lowering the barrier for exploration and innovation in the classroom as well as the lab.”

MuSHR is an open-source, full-stack system for building a robotic race car using off-the-shelf and 3D-printed parts. The website contains all of the information needed to assemble a MuSHR vehicle from scratch, including a detailed list of materials, ready-to-use files for 3D-printing portions of the race-car frame, step-by-step assembly instructions and video tutorials, and the software to make the car operational. 

More complex, research-oriented robot navigation systems, such as the Georgia Tech AutoRally, cost upwards of $10,000. The more budget-friendly MIT RACECAR, the original do-it-yourself platform that inspired the UW team’s project, costs $2,600 with basic sensing capabilities. By contrast, a low-end MuSHR race car with no sensing can be assembled for as little as $610, and a high-end car equipped with a multitude of sensors can be built for around $930. 

“We were able to achieve a dramatic cost reduction by using low-cost components that, when tested under MuSHR’s use cases, do not have a significant impact on system performance compared to other, more costly systems,” noted Allen School Ph.D. student Patrick Lancaster. But don’t be fooled by the relatively modest price; what’s under the hood will appeal to even seasoned researchers, as the high-end setup can support advanced research projects.

“Most robotic systems available today fall into one of two categories: either simple, educational platforms that are low-cost but also low-functionality, or more robust, research-oriented platforms that are prohibitively expensive and complex to use,” explained Allen School master’s student Johan Michalove, one of the developers of the project. “MuSHR combines the best of both worlds, offering high functionality at an affordable price. It is comprehensive enough for professional research, but also accessible enough for classroom or individual use.”

mushr.io logo and link

MuSHR comes with a set of computational modules that enable intelligent decision making, including a software and sensor package that enables localization in a known environment, and a controller for navigating towards a goal while avoiding collisions with other objects in its path. Users can build upon these preliminary AI capabilities to develop more advanced perception and control algorithms. The name “MuSHR” was inspired by the UW’s mascot, the Huskies, because it calls to mind the mushing of a dogsled at top speed. As it happens, a MuSHR race car would outpace the dogs, achieving speeds in excess of 15 miles per hour — useful for researchers working to solve critical challenges in autonomous vehicle systems, such as high-speed collision avoidance, to make them viable for real-world use.
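
To give a flavor of what working with the platform looks like, the sketch below shows how a user-written node might command an Ackermann-steered car like MuSHR over ROS, the framework the released software stack is built on. The topic name here is a placeholder rather than MuSHR’s actual interface; consult the MuSHR documentation for the real control topic on your build.

```python
#!/usr/bin/env python
# Minimal sketch: command an Ackermann-steered car (e.g., a MuSHR-style
# platform) over ROS. The topic name below is a placeholder; check the
# MuSHR documentation for the actual control topic used by the car's mux.
import rospy
from ackermann_msgs.msg import AckermannDriveStamped

def drive_in_circle():
    rospy.init_node("demo_driver")
    # Hypothetical topic name; substitute the one your car actually listens on.
    pub = rospy.Publisher("/car/drive", AckermannDriveStamped, queue_size=1)
    rate = rospy.Rate(20)  # publish commands at 20 Hz
    while not rospy.is_shutdown():
        msg = AckermannDriveStamped()
        msg.header.stamp = rospy.Time.now()
        msg.drive.speed = 1.0            # meters per second
        msg.drive.steering_angle = 0.3   # radians; constant turn
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    try:
        drive_in_circle()
    except rospy.ROSInterruptException:
        pass
```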

For students with a similar need for speed, MuSHR offers an opportunity to learn about the challenges of autonomous systems in a hands-on way. Allen School Ph.D. student and MuSHR project lead Matthew Schmittle worked with students to put the platform through its paces in the school’s Mobile Robotics course this past spring, in which 41 undergraduates used MuSHR to build a complete mobile robotic system while grappling with some of the same challenges as more seasoned researchers at the likes of Tesla, Uber, and Google.

“MuSHR introduces students to key concepts such as localization, planning, and control,” Schmittle, who served as a teaching assistant for the course, explained. “The course culminated in a demo day, where students had a chance to show how well they implemented these concepts by using their MuSHR cars to navigate a wayfinding course. We hope classrooms around the country will take advantage of MuSHR to engage more students in the exciting world of robotics and AI.”

So far, the MuSHR platform has featured in two undergraduate and two graduate-level courses at the UW. Nearly 130 students have had the opportunity to explore robotic principles using MuSHR, and the Allen School plans to offer similar classes in the future.

A group of robotic race cars in different colors against a plain white background
A fleet of MuSHR cars used by students in the Allen School’s Mobile Robotics course earlier this year. Mark Stone/University of Washington

“As a roboticist, I find MuSHR’s platform for rapid testing and refinement to be an enticing prospect for future research,” Srinivasa said. “As an educator, I am excited to see how access to MuSHR will spark students’ and teachers’ imaginations. And let’s not forget the hobbyists and hackers out there. We’re looking forward to seeing the various ways people use MuSHR and what new capabilities emerge as a result.” 

Individuals who are interested in building their own MuSHR robotic race car can download the code and schematics, and view tutorials on the MuSHR website. Educators who are interested in incorporating the MuSHR platform into their classes can email the team at mushr@cs.washington.edu to request access to teaching materials, including assignments and solutions. The team will also provide a small number of pre-built MuSHR cars to interested parties who would otherwise not have the financial means to build the cars themselves, with priority given to schools, groups, and labs whose members are underrepresented in robotics. Interested individuals or labs are encouraged to submit an application by filling out this form.

MuSHR is a work in progress, and the version released today is the lab’s third iteration of the platform. The team plans to continue refining MuSHR while employing it in a variety of research projects at the UW. In addition to Srinivasa, Michalove, Schmittle, and Lancaster, members of the MuSHR team include professor Joshua R. Smith, who holds a joint appointment in the Allen School and Department of Electrical & Computer Engineering; fifth-year master’s students Matthew Rockett and Colin Summers; postdocs Sanjiban Choudhury and Christoforos Mavrogiannis; and Allen School Ph.D. alumna and current postdoc Fereshteh Sadeghi.

The MuSHR project was undertaken with support from the Allen School; the Honda Research Institute; and Intel, which contributed the RealSense on-board cameras used in the construction of the vehicles.

Read coverage of MuSHR in GeekWire here.

Allen School’s latest faculty additions will strengthen UW’s leadership in robotics, machine learning, human-computer interaction, and more

The Allen School is preparing to welcome five faculty hires in 2019-2020 who will enhance the University of Washington’s leadership at the forefront of computing innovation in areas including robot learning for real-world systems, machine learning and its implications for society, online discussion systems that empower users and communities, and more. Meet Byron Boots, Kevin Lin, Jamie Morgenstern, Alex Ratner, and Amy Zhang — the outstanding scholars set to expand the school’s research and teaching in exciting new directions while building on our commitment to excellence, mentorship, and service:

Byron Boots, machine learning and robotics

Former Allen School postdoc Byron Boots returns to the University of Washington this fall after spending five years on the faculty of Georgia Institute of Technology, where he leads the Georgia Tech Robot Learning Lab. Boots, who earned his Ph.D. in Machine Learning from Carnegie Mellon University, advances fundamental and applied research at the intersection of artificial intelligence, machine learning, and robotics. He focuses on the development of theory and systems that tightly integrate perception, learning, and control while addressing problems in computer vision, system identification, state estimation, localization and mapping, motion planning, manipulation, and more. Last year, Boots received a CAREER Award from the National Science Foundation for his efforts to combine the strengths of hand-crafted, physics-based models with machine learning algorithms to advance robot learning.

Boots’ goal is to make robot learning more efficient while producing results that are both interpretable and safe for use in real-world systems such as mobile manipulators and high-speed ground vehicles. His work draws upon and extends theory from nonparametric statistics, graphical models, neural networks, non-convex optimization, online learning, reinforcement learning, and optimal control. One of the areas in which Boots has made significant contributions is imitation learning (IL), a promising avenue for learning policies for sequential prediction and high-dimensional robotic control tasks more efficiently than conventional reinforcement learning (RL). For example, Boots and colleagues at Carnegie Mellon University introduced a novel approach to imitation learning, AggreVaTeD, that allows for the use of expressive differentiable policy representations such as deep networks while leveraging training-time oracles. Using this method, the team showed that it could achieve faster and more accurate solutions with less training data — formally demonstrating for the first time that IL is more effective than RL for sequential prediction with near-optimal oracles.
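
AggreVaTeD itself combines differentiable policies with expert value estimates and policy gradients; as a rough illustration of the interactive imitation-learning pattern it builds on (closer in spirit to DAgger than to AggreVaTeD proper), the data-aggregation loop might look like the sketch below, where the environment, expert, and policy-fitting routine are all hypothetical stand-ins.

```python
import numpy as np

def interactive_imitation(env_reset, env_step, expert_action, fit_policy,
                          horizon=50, iterations=5):
    """Generic DAgger-style loop: roll out the current learner, query the
    expert on the visited states, aggregate the data, and refit the policy.
    (AggreVaTeD additionally exploits expert value estimates and gradients;
    this sketch only shows the data-aggregation skeleton.)"""
    states, actions = [], []
    policy = lambda s: expert_action(s)  # bootstrap from the expert
    for _ in range(iterations):
        s = env_reset()
        for _ in range(horizon):
            a = policy(s)                      # act with the current learner
            states.append(s)
            actions.append(expert_action(s))   # label visited state with expert action
            s = env_step(s, a)
        policy = fit_policy(np.array(states), np.array(actions))
    return policy

# Toy usage: imitate an expert that drives a scalar state toward zero.
expert = lambda s: -0.5 * s
fit = lambda S, A: (lambda s: np.linalg.lstsq(S, A, rcond=None)[0].T @ s)
learned = interactive_imitation(env_reset=lambda: np.array([5.0]),
                                env_step=lambda s, a: s + a,
                                expert_action=expert, fit_policy=fit)
```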

Following up on this work, Boots and graduate student Ching-An Cheng produced new theoretical insights into value aggregation, a framework for solving IL problems that combines data aggregation and online learning. As a result of their analysis, the duo was able to debunk the commonly held belief that value aggregation always produces a convergent policy sequence while identifying a critical stability condition for convergence. Their work earned a Best Paper Award at the 21st International Conference on Artificial Intelligence and Statistics (AISTATS 2018).

Driven in part by his work with highly dynamic robots that engage in complex interactions with their environment, Boots has also advanced the state of the art in robotic control.  Boots and colleagues recently developed a learning-based framework, Dynamic Mirror Descent Model Predictive Control (DMD-MPC), that represents a new design paradigm for model predictive control (MPC) algorithms. Leveraging online learning to enable customized MPC algorithms for control tasks, the team demonstrated a set of new MPC algorithms created with its framework on a range of simulated tasks and a real-world aggressive driving task. Their work won Best Student Paper and was selected as a finalist for Best Systems Paper at the Robotics: Science and Systems (RSS 2019) conference this past June.

Among Boots’ other recent contributions are the Gaussian Process Motion Planner algorithm, a novel approach to motion planning viewed as probabilistic inference that was designated “Paper of the Year” for 2018 by the International Journal of Robotics Research, and a robust, precise force prediction model for learning tactile perception developed in collaboration with NVIDIA’s Seattle Robotics Research Lab, where he teamed up with his former postdoc advisor, Allen School professor Dieter Fox.

Kevin Lin, computer science education

Kevin Lin arrives at the Allen School this fall as a lecturer specializing in introductory computer science education and broadening participation at the K-12 and post-secondary levels. Lin earned his master’s in Computer Science and bachelor’s in Computer Science and Cognitive Science from the University of California, Berkeley. Before joining the faculty at the Allen School, Lin spent four years as a teaching assistant (TA) at Berkeley, where he taught courses in introductory programming, data structures, and computer architecture. As a head TA for all three of these courses, he coordinated roughly 40 to 100 TAs per semester and focused on improving the learning experience both inside and outside the classroom.

Lin’s experience managing courses with over 1,400 enrolled students spurred his interest in using innovative practices and software tools to improve student engagement and outcomes and to attract and retain more students in the field. While teaching introductory programming at Berkeley, Lin helped introduce and expand small-group mentoring sections led by peer students who previously took the course. These sections offered a safe environment in which students could ask questions and make mistakes outside of the larger lecture. He also developed an optimization algorithm for assigning students to these sessions, with an eye toward improving not only their academic performance but also their sense of belonging in the major. As president of Computer Science Mentors, Lin grew the organization from 100 mentors to more than 200 and introduced new opportunities for mentor training and professional development.

In addition to supporting peer-to-peer mentoring, Lin uses software to effectively share feedback with students as a course progresses. For example, he helped build a system that, at predetermined intervals, automatically prompts students to interactively explore the automated checks that will be applied to their solutions. By giving students an opportunity to compare their own expectations for a program’s behavior against the real output, this approach builds students’ confidence in their work while reducing their need to ask for clarifications on an assignment. He also developed a system for sharing individualized, same-day feedback with students about in-class quiz results to encourage them to revisit and learn from their mistakes. Lin has also taken a research-based approach to identifying the most effective interventions that instructors can use to improve student performance.

Outside of the classroom, Lin has led efforts to expand K-12 computer-science education through pre-service teacher training and the incorporation of CS principles in a variety of subjects. He has also been deeply engaged in efforts to improve undergraduate education as a coordinator of Berkeley’s EECS Undergraduate Teaching Task Force.

Jamie Morgenstern, machine learning and theory of computation

Jamie Morgenstern joins the Allen School faculty this fall from the Georgia Institute of Technology, where she is a professor in the School of Computer Science. Her research explores the social impact of machine learning and how social behavior influences decision-making systems. Her goal is to build mechanisms for ensuring fair and equitable application of machine learning in domains people interact with every day. Other areas of interest include algorithmic game theory and its application to economics — specifically, how machine learning can be used to make standard economic models more realistic and relevant in the age of e-commerce. Previously, Morgenstern spent two years as a Warren Fellow of Computer Science and Economics at the University of Pennsylvania after earning her Ph.D. in Computer Science from Carnegie Mellon University.

Machine learning has emerged as an indispensable tool for modern communication, financial services, medicine, law enforcement, and more. The application of ML in these systems introduces incredible opportunities for more efficient use of resources. However, introducing these powerful computational methods in dynamic, human-centric environments may exacerbate inequalities present in society, as they may well have uneven performance for different demographic groups. Morgenstern terms this phenomenon “predictive inequity.” It manifests in a variety of online settings — from determining which discounts and services are offered, to deciding which news articles and ads people see. Predictive inequity may also affect people’s safety in the physical world, as evidenced by what Morgenstern and her colleagues found in an examination of object detection systems that interpret visual scenes to help prevent collisions involving vehicles, pedestrians, or elements of the environment. The team found that object detection’s performance varies when measured on pedestrians with differing skin types, independent of time of day or several other potentially mitigating factors. In fact, all of the models Morgenstern and her colleagues analyzed exhibited poorer performance in detecting people with darker skin types — defined as Fitzpatrick skin types between 4 and 6 — than pedestrians with lighter skin types. Their results suggest that, if these discrepancies in performance are not addressed, certain demographic groups may face a greater risk of fatality from object recognition failures.
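
The analysis behind that finding boils down to disaggregated evaluation: measuring a detector separately on each demographic group rather than only in aggregate. The sketch below illustrates the idea with a toy per-group recall computation; it is not the authors’ pipeline, and the numbers are made up.

```python
from collections import defaultdict

def recall_by_group(detections):
    """Compute per-group recall from labeled detection outcomes.
    `detections` is a list of (group, detected) pairs, where `group` is a
    demographic label (e.g., a Fitzpatrick skin-type bucket) and `detected`
    is True if an annotated pedestrian was found by the detector.
    Illustrative audit metric only, not the paper's exact protocol."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in detections:
        totals[group] += 1
        hits[group] += int(detected)
    return {g: hits[g] / totals[g] for g in totals}

# A gap like this one is what "predictive inequity" refers to.
sample = [("lighter", True)] * 90 + [("lighter", False)] * 10 \
       + [("darker", True)] * 80 + [("darker", False)] * 20
print(recall_by_group(sample))  # {'lighter': 0.9, 'darker': 0.8}
```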

Spurred on by these and other findings, Morgenstern aims to develop techniques for building predictive equity into machine learning models to ensure they treat members of different demographic groups with similarly high fidelity. For example, she and her colleagues developed a technique for increasing fairness in principal component analysis (PCA), a popular method of dimensionality reduction in the sciences that is intended to reduce bias in datasets but that, in practice, can inadvertently introduce new bias. In other work, Morgenstern and colleagues developed algorithms for incorporating fairness constraints in spectral clustering and in data summarization to ensure equitable treatment of different demographic groups in unsupervised learning applications.

Morgenstern also aims to understand how social behavior and self-interest can influence machine learning systems. To that end, she analyzes how systems can be manipulated and designs systems to be robust against such attempts at manipulation, with potential applications in lending, housing, and other domains that rely on predictive models to determine who receives a good. In one ongoing project, Morgenstern and her collaborators have explored techniques for using historical bidding data, where bidders may have attempted to game the system, to optimize revenue while incorporating incentive guarantees. Her work extends to making standard economic models more realistic and robust in the face of imperfect information. For example, Morgenstern is developing a method for designing near-optimal auctions from samples that achieve high welfare and high revenue while balancing supply and demand for each good. In a previous paper selected for a spotlight presentation at the 29th Conference on Neural Information Processing Systems (NeurIPS 2015), Morgenstern and Stanford University professor Tim Roughgarden borrowed techniques from statistical learning theory to present an approach for designing revenue-maximizing auctions that incorporates imperfect information about the distribution of bidders’ valuations while at the same time providing robust revenue guarantees.

Alex Ratner, machine learning, data management, and data science

Alexander Ratner will arrive at the University of Washington in fall 2020 from Stanford University, where he will have completed his Ph.D. working with Allen School alumnus Christopher Ré (Ph.D., ‘09). Ratner’s research focuses on building new high-level systems for machine learning based around “weak supervision,” enabling practitioners in a variety of domains to use less precise, higher-level inputs to generate dynamic and noisy training sets on a massive scale. His work aims to overcome a key obstacle to the rapid development and deployment of state-of-the-art machine learning applications — namely, the need to manually label and manage training data — using a combination of algorithmic, theoretical, and systems-related techniques.

Ratner led the creation and development of Snorkel, an open-source system for building and managing training datasets with weak supervision that earned a “Best of” Award at the 44th International Conference on Very Large Databases (VLDB 2018). Snorkel is the first system of its kind that focuses on enabling users to build and manipulate training datasets programmatically, by writing labeling functions, instead of the conventional and painstaking process of labeling data by hand. The system automatically reweights and combines the noisy outputs of those labeling functions before using them to train the model — a process that enables users, including subject-matter experts such as scientists and clinicians, to build new machine learning applications in a matter of days or weeks, as opposed to months or years. Technology companies, including household names such as Google, Intel, and IBM, have deployed Snorkel for content classification and other tasks, while federal agencies such as the U.S. Department of Veterans Affairs (VA) and the Food and Drug Administration (FDA) have used it for medical informatics and device surveillance tasks. Snorkel is also deployed as part of the  Defense Advanced Research Projects Agency (DARPA) MEMEX program, an initiative aimed at advancing online search and content indexing capabilities to combat human trafficking; in a variety of medical informatics and triaging projects in collaboration with Stanford Medicine, including two that were presented in Nature Communications papers this year; and a range of other open source use cases.
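
For a sense of how that workflow looks in practice, here is a minimal sketch using Snorkel’s open-source Python API (0.9-era; exact import paths and options may differ across releases): heuristics are written as labeling functions, applied to unlabeled data, and combined by a label model into probabilistic training labels.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

# Labeling functions encode noisy heuristics instead of hand labels.
@labeling_function()
def lf_contains_link(x):
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    return HAM if len(x.text.split()) < 5 else ABSTAIN

# Tiny toy dataset, just to show the shapes involved.
df_train = pd.DataFrame({"text": [
    "check out http://example.com for free stuff",
    "thanks, see you tomorrow",
    "win money now http://spam.example",
]})

# Apply the LFs to get a (num_examples x num_LFs) label matrix, then let
# the label model estimate LF accuracies and combine their votes.
applier = PandasLFApplier(lfs=[lf_contains_link, lf_short_message])
L_train = applier.apply(df=df_train)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=0)
probs = label_model.predict_proba(L_train)  # probabilistic training labels
```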

Building on Snorkel’s success, Ratner and his collaborators subsequently released Snorkel MeTaL, now integrated into the latest Snorkel v0.9 release, to apply the principle of weak supervision to the training of multi-task learning models. Under the Snorkel MeTaL framework, practitioners write labeling functions for multiple, related tasks, and the system models and integrates various weak supervision sources to account for varied — and often, unknown — levels of granularity, correlation, and accuracy. The team approached this integration in the absence of labels as a matrix completion-style problem, introducing a simple yet scalable algorithm that leverages the dependency structure of the sources to recover unknown accuracies while exploiting the relationship structure between tasks. The team recently used Snorkel MeTaL to help achieve a state-of-the-art result on the popular SuperGLUE benchmark, demonstrating the effectiveness of programmatically building and managing training data for multi-task models as a new focal point for machine learning developers.  

Another area of machine learning in which Ratner has developed new systems, algorithms, and techniques is data augmentation, a technique for artificially expanding or reshaping labeled training datasets using task-specific data transformations in which class labels are preserved. Data augmentation is an essential tool for training modern machine learning models, such as deep neural networks, that require massive labeled datasets. Done by hand, it requires time-intensive tuning to achieve the compositions required for high-quality results, while a purely automated approach tends to produce wide variations in end performance. Ratner co-led the development of a novel method for data augmentation that leverages user domain knowledge combined with automation to achieve more accurate results in less time. Ratner’s approach enables subject-matter experts to specify black-box functions that transform data points, called transformation functions (TFs), then applies a generative sequence model over the specified TFs to produce realistic and diverse datasets. The team demonstrated the utility of its approach by training its model to rotate, rescale, and move tumors in ultrasound images — a result that led to real-world improvements in mammography screening and other tasks.
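
The transformation-function idea is easy to picture in code. The sketch below composes a few hand-written, label-preserving TFs at random; the actual system learns a generative sequence model over TFs rather than sampling them uniformly, so treat this purely as an illustration.

```python
import random
import numpy as np

# Transformation functions (TFs): black-box, label-preserving edits to a
# data point. Composing a few of them yields new training examples.
def rotate90(img):
    return np.rot90(img)

def flip_horizontal(img):
    return np.fliplr(img)

def add_noise(img, scale=0.01):
    return img + np.random.normal(0.0, scale, img.shape)

TFS = [rotate90, flip_horizontal, add_noise]

def augment(img, seq_len=3):
    """Apply a random sequence of TFs. The published method learns *which*
    sequences produce realistic points; here the sequence is uniform."""
    for tf in random.choices(TFS, k=seq_len):
        img = tf(img)
    return img

augmented = [augment(np.random.rand(32, 32)) for _ in range(8)]
```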

Amy Zhang, human-computer interaction and social computing

Amy Zhang will join the Allen School faculty in fall 2020 after completing her Ph.D. in Computer Science at MIT. Zhang focuses on the design and development of online discussion systems and computational techniques that empower users and communities by giving them the means to control their online experiences and the flow of information. Her work, which blends deep qualitative needfinding with large-scale quantitative analysis towards the design and development of new user-facing systems, spans human-computer interaction, computational social science, natural language processing, machine learning, data mining, visualization, and end-user programming. Zhang recently completed a fellowship with the Berkman Klein Center for Internet & Society at Harvard University. She was previously named a Gates Scholar at the University of Cambridge, where she earned her master’s in Computer Science.

Online discussion systems have a profound impact on society by influencing the way we communicate, collaborate, and participate in public discourse. Zhang’s goal is to enable people to effectively manage and extract useful knowledge from large-scale discussions, and to provide them with tools for customizing their online social environments to suit their needs. For example, she worked with Justin Cranshaw of Microsoft Research to develop Tilda, a system for collaboratively tagging and summarizing group chats on platforms like Slack. Tilda — a play on the common expression “tl;dr,” or “too long; didn’t read” — makes it easier for participants to enrich and extract meaning from their conversations in real time as well as catch up on content they may have missed. The team’s work earned a Best Paper Award at the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2018).

The previous year at CSCW, Zhang and her MIT colleagues presented Wikum, a system for creating and aggregating summaries of large, threaded discussions that often contain a high degree of redundancy. The system, which draws its name from a portmanteau of “wiki” and “forum,” employs techniques from machine learning and visualization, as well as a new workflow Zhang developed called recursive summarization. This approach produces a summary tree that enables readers to explore a discussion and its subtopics at multiple levels of detail according to their interests. Wikipedia editors have used the Wikum prototype to resolve content disputes on the site. It was also selected by the Collective Intelligence for Democracy Conference in Madrid as a tool for engaging citizens on discussions of city-wide proposals in public forums such as Decide Madrid. 

Zhang has also tackled the dark side of online interaction, particularly email harassment. In a paper that appeared at last year’s ACM Conference on Human Factors in Computing Systems (CHI 2018), Zhang and her colleagues introduced a new tool, Squadbox, that enables recipients to coordinate with a supportive group of friends — their “squad” — to moderate the offending messages. Squadbox is a platform for supporting highly customized, collaborative workflows that allow squad members to intercept, reject, redirect, filter, and organize messages on the recipient’s behalf. This approach, for which Zhang coined the term “friendsourced moderation,” relieves the victims of the emotional and temporal burden of responding to repeated harassment. A founding member of the Credibility Coalition, Zhang has also contributed to efforts to develop transparent, interoperable standards for determining the credibility of news articles using a combination of article text, external sources, and article metadata. As a result of her work, Zhang was featured by the Poynter Institute as one of six academics on the “frontlines of fake news research.”

This latest group of educators and innovators joins an impressive — and impressively long — list of recent additions to the Allen School faculty who have advanced UW’s reputation across the field. They include Tim Althoff, whose research applies data science to advance human health and well-being; Hannaneh Hajishirzi, an expert in natural language processing; René Just, whose research combines software engineering and security; Rachel Lin and Stefano Tessaro, who introduced exciting new expertise in cryptography; Ratul Mahajan, a prolific researcher in networking and cloud computing; Sewoong Oh, who is advancing the frontiers of theoretical machine learning; Hunter Schafer and Brett Wortzman, who support student success in core computer science education; and Adriana Schulz, who is focused on computational design for manufacturing to drive the next industrial revolution. Counting the latest arrivals, the Allen School has welcomed a total of 22 new faculty members in the past three years.

A warm Allen School welcome to Alex, Amy, Byron, Jamie, and Kevin!

Allen School’s Tim Althoff and Carlos Guestrin recognized by SIGKDD for research advancing the field of data science

Tim Althoff (left) and Carlos Guestrin

Allen School professors Tim Althoff and Carlos Guestrin were recognized by the Association for Computing Machinery’s Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) this week for seminal research contributions made roughly a dozen years apart.

At the KDD 2019 conference held in Anchorage, Alaska, Althoff received the 2019 SIGKDD Doctoral Dissertation Award for his dissertation “Data Science for Human Well-being,” which presented new techniques for turning data generated by mobile and wearable devices into actionable insights to benefit individuals and society. And highlighting an example of enduring impact, SIGKDD recognized Guestrin and his co-authors with the 2019 Test of Time Award for their paper “Cost-effective Outbreak Detection in Networks,” which presented a new methodology for detecting outbreaks in a network efficiently and effectively.

Althoff, who earned his Ph.D. from Stanford University in 2018, devoted his graduate research to exploring how the multitude of data points about people’s offline behavior captured by mobile and wearable technologies can be used to improve physical and mental health and well-being. He and his collaborators developed a series of novel computational methods that leverage the proliferation of personal devices — and the digital traces of billions of human actions they capture — and techniques from data mining, social network analysis, and natural language processing to address significant health-related issues. For example, Althoff and his colleagues examined smartphone accelerometer data for more than 717,000 people across 111 countries — a planetary-scale analysis covering 68 million days of physical activity. The study uncovered a previously unknown inequality in physical activity, with significantly reduced activity levels among the female portion of some populations. The researchers coined the term “activity inequality” to describe this difference in physical activity within countries and found that it serves as a better predictor of obesity prevalence than average activity volume. Their results also showed that features of the built environment, such as urban walkability, play an important role in reducing activity inequality and the associated gender gap.
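
A common way to quantify that kind of within-population difference is the Gini coefficient applied to individuals’ daily step counts, which is the style of measure the activity-inequality work relies on. A minimal sketch of that computation, with made-up numbers, follows.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a sample (0 = perfect equality, 1 = maximal
    inequality). A standard way to quantify within-population differences
    such as 'activity inequality' over daily step counts."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Equivalent to the mean-absolute-difference formulation.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical daily step counts for two populations.
even_steps   = [6000, 6500, 7000, 7200, 6800]
uneven_steps = [1500, 2000, 9000, 11000, 12000]
print(gini(even_steps), gini(uneven_steps))  # the second is noticeably larger
```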

Althoff was also one of the lead researchers behind the largest study to date to objectively measure the impact of sleep on performance in the wild by correlating data on 3 million nights of sleep measured by wearable devices with 75 million performance tasks in the form of search-engine interactions. These interactions, including keystroke speed and clicks, enabled the researchers to analyze how cognitive performance varies throughout the day depending on circadian rhythms, whether a subject is a “morning person” or a “night owl,” and the duration and timing of prior sleep. The team also developed a statistical model for determining the impact of insufficient sleep that found that two consecutive nights of fewer than six hours of sleep lead to performance impairments lasting six days.

Tim Althoff (center) receiving his SIGKDD Doctoral Dissertation Award from Michael Zeller (left) and Jian Pei

Althoff and his colleagues later developed a statistical tool, Time-varying, Interdependent, and Periodic Action Sequences (TIPAS), that models the complex dependencies and periodic recurrence of human behaviors associated with essential activities such as eating, exercise, and sleep. Testing their approach on 12 million actions taken by 20,000 users over the course of 17 months, Althoff’s team demonstrated that the model could be used to accurately predict future actions to enable targeted health interventions and app personalization.

In another example of how data analysis can yield actionable insights, Althoff and his colleagues also embarked on the largest-ever quantitative study of mental health counseling conducted via text messaging in an effort to identify the characteristics of successful counseling conversations. Applying techniques from natural language processing such as sequence-based conversation models, language model comparisons, message clustering, and word frequency analyses, the researchers sought to identify the conversational strategies associated with positive outcomes. They discovered a set of factors that were common among the most successful counselor-client interactions, including a counselor’s adaptability to the direction of the conversation; the extent to which they tailored their responses to the individual while using creative and personalized, rather than generic, language; and the ability to focus quickly on the core issue in order to move toward collaboratively solving the problem while facilitating a positive perspective change on the part of their client. The project represented the first time that researchers had connected large-scale data with labeled conversation outcomes to reveal the most effective conversational strategies in mental health counseling.

Guestrin and his then-colleagues at Carnegie Mellon University originally presented their paper on network outbreak detection at KDD 2007. Their work addressed the problem of how to optimize the placement of sensors or nodes to facilitate rapid detection of an outbreak or “information cascade” that initiates from a single node and spreads across the network — a question of both theoretical and practical significance. A network, in this case, could be physical, as in a water distribution system, or virtual, as in a social network or the blogosphere; an outbreak might consist of the spread of a physical contaminant within the system, or the viral spread of information online. The team presented a new algorithm, Cost-Effective Lazy Forward selection (CELF), that determines the near-optimal placement of sensors to detect such outbreaks, whether physical or virtual, by exploiting the principle of submodularity — the quality of exhibiting diminishing returns. 

The SIGKDD Test of Time Award presentation, left to right: Michael Zeller, Christos Faloutsos, Jure Leskovec, Carlos Guestrin, and Jian Pei

The central idea, drawing upon one of the paper’s real-world examples, is that the consumption of a blog post (placing of a sensor) yields more new information after having read only a few other blog posts than it does after having read many posts. The goal was to efficiently achieve a solution that minimizes cost — in the aforementioned instance, minimizing the time it takes to read multiple blog posts before detecting when a piece of information has gone viral. In another potential use case cited in the paper, the same algorithm could be used to speed the detection of a contaminant in the water supply before it reaches the broader population. Using this approach, the researchers were able to compute near-optimal sensor placements for detecting network outbreaks roughly 700 times faster than with a simple greedy algorithm. They also demonstrated that CELF could achieve speed-ups and storage savings of several orders of magnitude at scale. Last but not least, the team showed that the same methodology could be applied to the study of complex, application-specific questions around multicriteria tradeoff, cost-sensitivity analyses, and generalization behavior.
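
The algorithmic trick that makes CELF fast is "lazy" greedy selection: because the objective is submodular, a previously computed marginal gain can only shrink, so stale gains stored in a priority queue serve as upper bounds and rarely need to be recomputed. The sketch below illustrates the idea on a toy coverage problem; it is a simplified rendering, not the paper’s implementation.

```python
import heapq

def celf_select(candidates, gain, k):
    """Lazy greedy selection for a monotone submodular objective.
    `gain(item, selected)` returns the marginal benefit of adding `item`
    to the already-selected set. Submodularity guarantees a cached gain
    is an upper bound, so most re-evaluations can be skipped."""
    selected = []
    # Max-heap of (negated gain, item, round in which the gain was computed).
    heap = [(-gain(c, selected), c, 0) for c in candidates]
    heapq.heapify(heap)
    while len(selected) < k and heap:
        neg_g, item, stamp = heapq.heappop(heap)
        if stamp == len(selected):          # gain is fresh for this round
            selected.append(item)
        else:                               # stale: recompute and reinsert
            heapq.heappush(heap, (-gain(item, selected), item, len(selected)))
    return selected

# Toy usage: pick 2 "sensors" that together cover the most elements.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}}

def marginal(item, sel):
    covered = set().union(*(coverage[s] for s in sel)) if sel else set()
    return len(coverage[item] - covered)

print(celf_select(["a", "b", "c"], marginal, k=2))  # ['c', 'a']
```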

Guestrin’s co-authors on the paper include then-Ph.D. students Jure Leskovec, now a faculty member at Stanford University, and Andreas Krause, now a faculty member at ETH Zurich; CMU professors Christos Faloutsos and Jeanne VanBriesen; and Natalie Glance, former Senior Research Scientist at Nielsen BuzzMetrics and now Vice President of Engineering at Duolingo.

Congratulations to Tim and to Carlos and his colleagues on their outstanding achievements!

Allen School’s Kuikui Liu and Shayan Oveis Gharan earn Best Paper Award at STOC 2019 with new technique for counting the bases of a matroid

Kuikui Liu (left) and Shayan Oveis Gharan

First-year Ph.D. student Kuikui Liu and professor Shayan Oveis Gharan of the Allen School’s Theory of Computation group earned a Best Paper Award from the Association for Computing Machinery’s 51st annual Symposium on the Theory of Computing (STOC 2019) by presenting a novel approach for counting the bases of matroids and resolving a 30-year-old conjecture by Mihail and Vazirani that will have wide-ranging implications for the theory and practice of computation and mathematics. They and their co-authors resolved this fundamental open problem by combining seemingly unrelated areas of mathematics and theoretical computer science, namely Hodge theory for combinatorial geometries, analysis of Markov chains, and high-dimensional expanders. The team’s work has a number of real-world applications — from quantifying the reliability of communication networks and error-correcting codes used in data transmission, to performing diverse feature vector selection for training machine learning models.

In their winning paper, “Log-Concave Polynomials II: High-Dimensional Walks and an FPRAS for Counting Bases of a Matroid,” Liu, Oveis Gharan, and co-authors Nima Anari of Stanford University and Cynthia Vinzant of North Carolina State University describe the first fully polynomial randomized approximation scheme (FPRAS) for the problem. Their algorithm, which is based on a simple two-step Markov Chain Monte Carlo process, can be applied to counting the bases of any matroid given by an independent set oracle.
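
At the heart of the scheme is a strikingly simple random walk on bases: drop a random element of the current base, then move to a uniformly random base containing what remains. The sketch below shows that "down-up" step for the special case of spanning trees of a graph (the bases of a graphic matroid); turning such samples into an approximate count involves further standard reductions not shown here.

```python
import random

def component(forest_edges, nodes, start):
    """Return the set of nodes reachable from `start` along `forest_edges`."""
    adj = {v: [] for v in nodes}
    for u, v in forest_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def down_up_step(tree, edges, nodes):
    """One 'down-up' step on spanning trees (bases of a graphic matroid):
    drop a uniformly random tree edge, then add a uniformly random graph
    edge that reconnects the two resulting components (possibly the same edge)."""
    dropped = random.choice(sorted(tree))
    forest = tree - {dropped}
    side = component(forest, nodes, dropped[0])
    crossing = [e for e in edges if (e[0] in side) != (e[1] in side)]
    return forest | {random.choice(crossing)}

# Toy usage: a 4-cycle with one chord; iterating the step mixes toward an
# (approximately) uniform random spanning tree.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
tree = {(0, 1), (1, 2), (2, 3)}  # any spanning tree works as a start
for _ in range(100):
    tree = down_up_step(tree, edges, nodes)
print(sorted(tree))
```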

The question is relevant to a variety of domains and practical situations that call for the assignment of resources or transmission of information. For example, a company has a set of jobs it needs to complete and a set of machines capable of completing a subset of those jobs. Assuming each machine is assigned one job and taking into account the probability of each machine failing independently, the authors’ approach can be used to obtain an efficient algorithm that approximates up to a 1 percent error the probability that all jobs will be completed. In another example, a sender is transmitting an encoded message for which there is a certain probability that each bit of the message will be erased independently. Assuming the code is a linear code, the team’s algorithm provides an efficient procedure to approximate up to a 1 percent error the probability that the receiver can recover the erased bits.

In addition to its application in network reliability, data transmission, and machine learning, the approach put forward by Liu, Oveis Gharan, and their collaborators can be used to sample random spanning forests of a graph — a development that will be of great interest to researchers in combinatorics and probability and to those who study credit networks. The researchers also proved in their paper that the bases exchange graph of any matroid has edge expansion of at least 1, affirming the Mihail-Vazirani conjecture first put forward three decades ago.

The team presented its work at the STOC 2019 conference, which is organized by the ACM Special Interest Group on Algorithms and Computation Theory (ACM SIGACT), in Phoenix, Arizona in June. Read the full research paper here.

Congratulations to Kuikui, Shayan, and the entire team!

Professor Pedro Domingos receives IJCAI John McCarthy Award for excellence in artificial intelligence research

Allen School professor Pedro Domingos has been selected as the 2019 recipient of the John McCarthy Award from the International Joint Conference on Artificial Intelligence (IJCAI). The award, which is named for one of the founders of the field of AI, recognizes established, mid-career researchers who have amassed a track record of significant research contributions that have been influential in advancing the field. Domingos is being honored by the IJCAI for his multiple contributions in machine learning and data science and for advancements in unifying logic and probability.

Domingos focuses on ways to enable computers to discover new knowledge, learn from experience, and extract meaning from data with little or no help from people. A prolific researcher and speaker on the topics of AI, machine learning, and data mining, he has authored more than 200 technical papers on these and other subjects. In 2015, Domingos published The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, a book intended for a broad audience that explores in a comprehensive yet accessible way how developments in machine learning impact people’s everyday lives and the potential — and potential pitfalls — of living in the “age of algorithms.”

Domingos himself is partly responsible for ushering in this exciting new era through a series of research projects that represents the state of the art. For example, he and former student Abe Friesen (Ph.D., ‘17) developed a new algorithm, known as Recursive Decomposition into locally Independent Subspaces (RDIS), that outperformed existing techniques for solving a broad class of nonconvex optimization problems. Their work, which earned the duo a Distinguished Paper Award at IJCAI-15, can be applied in a variety of domains, including computer vision, machine learning, and robotics. That same year, Domingos received the SIGKDD Test of Time Award, along with his collaborator and Allen School alumnus Geoff Hulten (Ph.D., ‘05), from the Association for Computing Machinery’s Special Interest Group on Knowledge Discovery and Data Mining for the Very Fast Decision Tree learner (VFDT), an algorithm for mining high-speed data streams. Originally published in 2000, the VFDT algorithm remained the fastest decision tree learner available 15 years after its release. Domingos and Hulten later expanded the VFDT algorithm into the Very Fast Machine Learning (VFML) toolkit for mining high-speed data streams and vast data sets.
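
The statistical engine inside VFDT is the Hoeffding bound, which tells the learner how many streamed examples are enough to commit to a split. A minimal sketch of that decision rule is below (simplified notation; the real learner also handles ties and tracks per-attribute statistics).

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound: with probability 1 - delta, the observed mean of n
    samples of a quantity with range `value_range` is within epsilon of its
    true mean. VFDT splits a node once the gap between the best and
    second-best attribute's observed gain exceeds epsilon."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, n, value_range=1.0, delta=1e-7):
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)

# Example: with a gain gap of 0.02, ~5,000 streamed examples are not yet
# enough to split confidently, but ~50,000 are.
print(should_split(0.12, 0.10, n=5_000))   # False
print(should_split(0.12, 0.10, n=50_000))  # True
```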

In 2014, Domingos earned the SIGKDD Innovation Award — the highest award for technical excellence in the field of data mining and data science — for foundational contributions to data stream analysis, cost-sensitive classification, adversarial learning, Markov logic networks, and viral marketing and information integration. Among his early-career achievements recognized by SIGKDD was MetaCost, a novel method for cost-sensitive classification that can be applied to a wide array of data mining problems. MetaCost, which earned a Best Paper Award for Fundamental Research when it was released, offered an improvement over existing practices such as stratification or making individual algorithms cost-sensitive. In another contribution that also earned a Best Paper Award for Fundamental Research, Domingos explored Occam’s razor as it is applied to data mining, effectively dismissing one accepted interpretation — that simplicity leads to greater accuracy — while building a case for a refined alternative: treating simplicity as a goal in itself. Under that refined interpretation, Domingos argues for decoupling discovery of the most accurate model from the extraction of the most comprehensible approximation of it.
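
Returning to MetaCost, its central step is easy to state: estimate class probabilities for each training example (the original method bags an ensemble to do so), relabel each example with the class that minimizes its expected cost, and retrain any off-the-shelf classifier on the new labels. A compact sketch of the relabeling rule follows.

```python
import numpy as np

def metacost_relabel(class_probs, cost_matrix):
    """Core relabeling step in the spirit of MetaCost: each training example
    is reassigned the class that minimizes its expected misclassification
    cost under estimated class probabilities; a classifier is then retrained
    on the new labels.
    class_probs: (n_examples, n_classes) array of P(class | example).
    cost_matrix[i][j]: cost of predicting class j when the true class is i."""
    expected_cost = class_probs @ np.asarray(cost_matrix)  # (n_examples, n_classes)
    return np.argmin(expected_cost, axis=1)

# Toy example: missing class 1 is 10x costlier than a false alarm, so
# borderline examples get relabeled as class 1.
probs = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
costs = [[0, 1],    # true class 0: predicting 1 costs 1
         [10, 0]]   # true class 1: predicting 0 costs 10
print(metacost_relabel(probs, costs))  # [1 1]
```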

Domingos’ other pioneering contributions to the field include the first algorithm for solving the problem of adversarial classification — which spurred the growth of adversarial learning as a significant branch of machine learning research and practice — developed with then-students Nilesh Dalvi, Mausam, Sumit Sanghai, and Deepak Verma. Working with another former student, Hoifung Poon (Ph.D., ‘11), Domingos employed Markov logic to produce the first unsupervised approach to semantic parsing for natural language processing.

Domingos will be formally recognized with the John McCarthy Award at IJCAI-19 in Macao, China next month.

Congratulations, Pedro!

Remembering Bob Ritchie, one of the founders of Computer Science & Engineering at the University of Washington

Greyscale portrait of smiling man wearing glasses
Bob Ritchie

The Allen School family is mourning the recent passing of Bob Ritchie, one of our founding faculty members who helped build the foundations of Computer Science & Engineering at the University of Washington as we know it today.

Originally from Alameda, California, Ritchie earned his bachelor’s from Reed College and his Ph.D. from Princeton University — both in Mathematics — before joining the faculty of Dartmouth College in 1960. Two years later, he had a near escape: he was almost recruited by Bowdoin College before opting instead to return to the west coast and join the UW. Ritchie started out as a professor of Mathematics, before the existence of a distinct computer science program on campus. He later joined forces with six other faculty members to push for the creation of the Computer Science Group, which was the predecessor of the Department of Computer Science. That, in turn, became the Department of Computer Science & Engineering, and ultimately, the Paul G. Allen School.

Ritchie was among the original cadre of seven full-time faculty that comprised the early department and helped establish its reputation for excellence in theoretical computer science research and teaching. He became the second permanent department chair in 1977, succeeding interim chair Hellmut Golde after inaugural chair Jerre Noe stepped down. During his six-year term, Ritchie led the department to national prominence, earning a place among the top 10 programs in the country — and from there, we never looked back.

“Hellmut and Jerre established our culture. Bob established our external reputation,” said professor Ed Lazowska, who joined the faculty the same year Ritchie became chair. “A United Airlines ‘100,000 Miler’ plaque hung on the wall of Bob’s office — miles accrued flying around the country telling people that something exciting was happening in computer science at the University of Washington.”

Smiling man wearing plaid shirt and glasses holding glass of wine seated on plane
Ritchie logged a lot of air miles touting UW CSE nationwide before retirement

While Ritchie’s impact in raising the reputation of the program was significant, his technical excellence and commitment to mentorship are equally important aspects of his legacy. Ritchie was an early pioneer in the creation of what is now called computational complexity: the theory of the time and storage needed to solve computational problems. He was among the first to examine classically defined computations from mathematical logic and show that they are equivalent to modern complexity classes defined in terms of time and storage, calling them “predictably computable functions.” As professor emeritus Richard Ladner recalled, Ritchie was keen to help other researchers branch out into theory.

“Bob Ritchie was my first mentor when I joined the Computer Science Group in 1971. We both had backgrounds in mathematics and had moved into computer science theory,” he said. “Bob supported my transition from mathematics to theory research. He pointed out which were the important conferences to attend, and advised me which journals I should publish in. My first NSF grants were joint with Bob.

“We met on a regular basis until I got on my own feet,” Ladner continued. “I will always be grateful for his early mentoring that helped me get off to a flying start as a teacher and researcher.”

Ritchie advised two of the early Ph.D. students produced by the then-fledgling Computer Science Group — John Baker and Richard (Dick) Hamlet (Ph.D., ‘71) — in addition to supervising graduate students in Mathematics, his original home at UW. One of these students, Francine Berman (Ph.D., ’79), fondly remembers Ritchie and the program he helped to create.

“Bob was one of my thesis advisors and he was always supportive and encouraging,” recalled Berman, who, like several students in those early days in the school’s history, was officially enrolled in another academic department but advised by Computer Science faculty. “I’m sure I am among many that he helped along the way. CS at UW was a very special place in those days, as it is now, and provided a place to nurture so many of us.”

Five men standing with arms linked smiling
The way they were: Former UW CSE chairs (left to right) Bob Ritchie, Jerre Noe, Ed Lazowska, David Notkin, and Jean-Loup Baer

Ritchie also had a reputation for being a steady leader and an astute negotiator of bureaucracy. According to professor emeritus Jean-Loup Baer, who arrived at UW in the second year of the Computer Science Group’s existence and would later go on to become department chair, Ritchie would remain “cool and composed” when faced with unexpected challenges.

“While our first chair, Jerre Noe, was building a strong faculty core, he relied heavily on Bob’s knowledge and influence in his dealings with the UW administration,” Baer explained. “Almost 10 years later, when Bob asked me to be his Associate Chair, I was able to witness first-hand how his brilliant mind dealt in creative and non-obvious ways with the intricacies of large organizations.

“Bob was an important participant in the negotiations that brought the first two large collaborative projects with industry:  the creation and funding of the VLSI Consortium, and the contract with DEC for the development of the Pascal Compiler,” he continued. “I was Acting Chair while Bob was on sabbatical when we received the delivery of the VAX-11/780 as part of the Pascal Compiler deal. The only hitch was that it was a few months early, and the room where it was to be housed was not ready. I wish I had kept my email exchange with Bob — I know that mine contained the three words ‘panic, Panic, PANIC.’ Of course, Bob guided me to a solution and the VAX spent a few weeks, or months, in the campus surplus store. By watching Bob I learned that when dealing with academic administrations one has to be patient, calm, and creative and that the shortest path between point A and point B is not necessarily a straight line.”

Smiling man wearing glasses and a patterned shirt, holding a champagne coupe
Ritchie in one of his famous shirts

After completing his term as chair of CSE in 1983, Ritchie departed the UW to embark upon a second career in industry, holding positions in the research divisions of Xerox and Hewlett-Packard. But it is his influence on the emergence of UW as a powerhouse in computer science education and research that continues to be felt today.

“Bob propelled us to write the proposal that led to the first award in NSF’s Coordinated Experimental Research program, which put us on the map in computer systems. He also managed to wangle us a top-10 ranking in the 1982 National Academies assessment of doctoral programs,” noted Lazowska. “We remember him for his leadership, for his advocacy, for his friendship — and for his shirts.”

Ritchie is survived by his wife of 62 years, Audrey; daughter Lynne Gustafson; son and Allen School alumnus Scott Ritchie (B.S., ‘83); and five grandchildren: Katherine, Erin, Kelly, Callum, and Sean. We at the Allen School are keeping them in our thoughts as we celebrate Bob’s legacy to our program, our university, and our field.

Read more →

Professor Magdalena Balazinska appointed to lead the Paul G. Allen School

Woman wearing purple shirt smiling

The University of Washington today announced the appointment of Magdalena Balazinska as the next director of the Paul G. Allen School of Computer Science & Engineering. Former Dean of Engineering Michael Bragg and UW Provost Mark Richards selected Balazinska, a professor who co-leads the Allen School’s Database and Data Science research groups, to succeed current director Hank Levy, who has overseen 13 years of growth in the school’s size, stature, and impact.

“I am thrilled and deeply honored to work with students, faculty, staff and community stakeholders to advance computer science education and innovation,” Balazinska said in a UW News release. “Together, we have the opportunity to work on the most challenging problems of our time and develop groundbreaking new technology. I look forward to contributing to this goal as the new Allen School Director.”

For the past two years, Balazinska has served as director of the UW eScience Institute and, more recently, as the UW’s first Associate Vice Provost for Data Science. In that dual role, she has developed broad cross-campus partnerships to advance data-intensive discovery across a variety of disciplines and spearheaded the establishment of educational programs for students in the burgeoning field of data science. As Principal Investigator on a major IGERT grant from the National Science Foundation, Balazinska led the creation of the Data Science and Advanced Data Science Ph.D. options at UW, and later co-led the creation of the Undergraduate Data Science option. More than a dozen departments or schools on campus offer students a chance to combine their primary field of study with one or more data science options, from astronomy and oceanography, to genome sciences and psychology. Additional units are preparing to roll out the program in the future, opening up opportunities for students in a widening variety of fields to combine their major with the latest data science methods and tools.

“Magda has been an inspired and energetic leader of campus-wide data science education, research, and adoption,” said professor Ed Lazowska, Bill & Melinda Gates Chair in the Allen School. “She succeeded me as director of UW’s eScience Institute; she became UW’s first-ever Associate Vice Provost for Data Science; she served as Principal Investigator of the nation’s first NSF grant to develop a graduate data science curriculum; and she played an integral role in designing and implementing UW’s undergraduate data science curriculum. She really understands how to partner.”

Balazinska joined the Allen School faculty in 2006, the same year she completed her Ph.D. at MIT. A co-leader of the Database and Data Science groups and an affiliate faculty member in the UW Reality Lab, Balazinska focuses her research primarily on database management systems. Her current research spans data management for data science, big data systems, cloud computing, and image and video analytics. With regard to the latter, Balazinska has recently turned her attention to the development of data management techniques for augmented and virtual reality systems because, as she put it, “Why should the computer graphics and vision people have all the fun?”

Balazinska has made multiple enduring contributions to the field during her career. For example, her work on novel techniques for increasing fault tolerance in distributed stream processing — at the time, an emerging class of data-intensive applications deployed in a growing number of domains, including computer networking, financial services, medical information systems, and the military — earned a Test of Time Award in 2017 from the Association for Computing Machinery’s Special Interest Group on Management of Data (ACM SIGMOD). She also received a 10-Year Most Influential Paper Award from the Working Conference on Reverse Engineering (WCRE) in 2010 for her work on advanced clone analysis for automatically refactoring software code. In addition, Balazinska’s body of work in scalable distributed data systems inspired the Very Large Data Bases (VLDB) conference — the flagship international conference on database research — to honor her with the inaugural Women in Database Research Award. Beyond her own research, Balazinska has contributed to building up the regional database research community, co-founding the Northwest Database Society (NWDS) to bring together scholars and practitioners focused on databases and database management systems in the Pacific Northwest.

“Magda brings to this new leadership position not just a wealth of experience, but the type of vision and ability to work with people that will help keep the University of Washington and our entire tech community on a successful path,” Microsoft President Brad Smith told GeekWire.

Previously, Balazinska received an NSF CAREER Award, which supports the most promising early-career scientists and engineers. She is also a past recipient of a Microsoft Research New Faculty Fellowship; an HP Labs Innovation Research Award; two Google Faculty Research Awards; and multiple best paper awards. Before enrolling in MIT’s graduate program in Computer Science, she earned her bachelor’s and master’s degrees in Computer Engineering and Electrical Engineering, respectively, from the Ecole Polytechnique de Montréal.

“I feel really good about how far we have come as a school and the direction we are headed for the future. We have an outstanding organization and culture with tremendous potential,” said Levy, who holds the Wissner-Slivka Chair in the Allen School. “Magda is an accomplished leader, gifted researcher, and forward-thinking educator who understands how computing can have a significant, positive impact on the university and on society. I am excited to see the school thrive with Magda at the helm.”

Balazinska will assume her new role effective January 1, 2020, subject to approval by the UW Board of Regents. Read the UW News release here, and a related story in GeekWire here.

Congratulations, Magda!

Read more →

Ph.D. alumnus Brandon Lucia receives IEEE TCCA Young Computer Architect Award

Brandon Lucia (right) accepts the 2019 Young Computer Architect Award from IEEE TCCA chair Josep Torrellas.

Allen School alumnus Brandon Lucia (Ph.D., ‘13) has been recognized by the IEEE Computer Society’s Technical Committee on Computer Architecture with the 2019 Young Computer Architect Award. This award recognizes an outstanding researcher who has completed their doctoral degree within the past six years and who has made innovative contributions to the field of computer architecture. Lucia, who completed his Ph.D. working with Allen School professor Luis Ceze and is now a faculty member in Electrical & Computer Engineering at Carnegie Mellon University, was recognized for “pioneering research in parallel debugging and intermittent computing.”

Lucia devoted his early research career to the development of novel approaches for concurrency debugging and failure avoidance for parallel and concurrent software such as shared-memory multi-threaded programs. Unlike sequential software, parallel and concurrent programs interleave multiple computations, and the ordering of program events can vary from one execution to the next, which makes pinpointing bugs particularly challenging. In response, Lucia and his collaborators developed a string of techniques blending computer architecture and systems support to make it easier to find and fix concurrency bugs and to minimize the risk of schedule-dependent failures that tend to erode system reliability. He was among the first researchers to devise mechanisms for automatically avoiding such failures ― typically the result of latent concurrency bugs discovered only after software is put into production ― without altering program semantics. His contributions include Atom-Aid, which capitalizes on the natural tendency of systems with implicit atomicity to prevent some schedule-dependent failures; ColorSafe, a scheme for applying colors to groups of data that makes it easier to avoid multi-variable atomicity violations in programs; and Aviso, a software system for automatically avoiding schedule-dependent failures by generating schedule constraints that disrupt the order of events based on historical failed executions.
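
As a purely hypothetical illustration of why such failures are so hard to reproduce, the short sketch below shows a classic schedule-dependent bug: two threads performing a non-atomic read-modify-write on shared state, so the final value depends on how the runtime happens to interleave them from run to run. It is not drawn from Atom-Aid, ColorSafe, or Aviso.

```python
# Hypothetical example of a schedule-dependent failure (lost update).
import threading

counter = 0

def deposit(times):
    global counter
    for _ in range(times):
        # Read-modify-write is not atomic: another thread can interleave between
        # the read and the write, silently discarding one of the updates.
        current = counter
        counter = current + 1

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The expected total is 200000, but on many interleavings the result is smaller,
# and the exact value can change from run to run -- which is precisely why such
# bugs are difficult to reproduce and debug.
print(counter)
```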

Since his arrival at CMU, Lucia and his students in the Abstract Research Group have focused on the development of intermittent computing, including the design of software and hardware systems for addressing reliability issues and energy storage needs in battery-less devices. This rapidly growing area of research focuses on enabling computation, sensing, and communication with devices that harvest ambient energy to power their operations for a variety of real-world ― and out-of-this-world ― applications in potentially extreme environments. To advance this burgeoning technology, Lucia led a team of researchers in developing a new energy storage architecture, Capybara, that can be dynamically reconfigured in response to varying application energy demands. Their work earned the Best Paper Award at ASPLOS 2018 and an Honorable Mention in IEEE MICRO’s Top Picks last year. Other contributions include Chinchilla, a compiler and run-time system for supporting the efficient, intermittent operation of energy-harvesting devices through adaptive dynamic checkpointing, and the Energy-Interference-Free Debugger (EDB), a tool for monitoring and debugging intermittent systems without adversely impacting their energy state that was selected as an IEEE MICRO Top Pick in 2017.
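
For a rough sense of the checkpointing idea that underpins intermittent computing, the sketch below periodically commits progress to stable storage so that execution can resume from the last checkpoint after a power failure instead of restarting from scratch. The file-backed “nonvolatile memory,” checkpoint interval, and task are assumptions made for illustration and do not reflect the actual Capybara or Chinchilla designs.

```python
# Minimal, hypothetical sketch of checkpointed (intermittent-style) execution.
import json
import os

CHECKPOINT = "checkpoint.json"   # stands in for nonvolatile memory

def load_checkpoint():
    """Resume from the last committed state, or start fresh on first boot."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"i": 0, "total": 0}

def save_checkpoint(state):
    """Commit progress atomically so a power loss mid-write cannot corrupt it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)   # atomic rename acts as the commit point

def run(samples):
    state = load_checkpoint()
    while state["i"] < len(samples):
        state["total"] += samples[state["i"]]   # the "useful work"
        state["i"] += 1
        # Checkpoint every few steps; if power fails, the next boot re-executes
        # at most the work done since the last commit instead of starting over.
        if state["i"] % 4 == 0:
            save_checkpoint(state)
    save_checkpoint(state)
    return state["total"]

print(run(list(range(20))))
```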

Lucia collected his latest accolade at the 46th International Symposium on Computer Architecture (ISCA 2019) this week in Phoenix, Arizona. He is not the first with an Allen School connection to earn the Young Computer Architect Award since its inception in 2011. Last year, fellow 2013 alumnus Hadi Esmaeilzadeh was recognized for his contributions to novel computer architectures in machine learning and approximate computing, and their mentor Ceze was recognized in 2013 for his work on improving multi-core programmability and correctness.

Congratulations, Brandon!

Read more →
