
Thanks to the trades who are making the Gates Center a reality!

On a typical day, nearly 150 tradesmen and tradeswomen are at work on the Bill & Melinda Gates Center for Computer Science & Engineering. Every day we marvel at their amazing work, and periodically we show our appreciation with a BBQ lunch and special recognition for some folks who have gone above and beyond in contributing to the great culture of the team.

Today’s event was particularly special: the first event held in the atrium of the Gates Center.

Thank you, Mortenson, and all your subs and their people – you’re the best! And thank you, LMN, for an incredible design.


Celebrating Seattle’s sweep of this year’s major awards in computer architecture

This afternoon we celebrated an unprecedented clean sweep of the major awards at the International Symposium on Computer Architecture.

Hadi Esmaeilzadeh, UW Paul G. Allen School Ph.D. alumnus and UCSD CSE faculty member, received the Young Computer Architect Award from the IEEE Technical Committee on Computer Architecture, given annually to an individual who has completed his/her Ph.D. degree within the last 6 years and has made one or more outstanding, innovative research contributions. Hadi was honored “in recognition of outstanding contributions to novel computer architectures in emerging domains, especially in machine learning and approximate computing.”

Gabe Loh, Fellow Design Engineer with AMD Research in Seattle, received the ACM SIGARCH Maurice Wilkes Award, given annually for an outstanding contribution to computer architecture made by an individual whose computer-related professional career started no earlier than 20 years prior to the year of the award. Gabe was honored “for outstanding contributions to the advancement of die-stacked architectures.”

Susan Eggers, UW Paul G. Allen School professor emerita, received the ACM/IEEE Computer Society Eckert-Mauchly Award, the computer architecture community’s most prestigious award. Susan was recognized “for outstanding contributions to simultaneous multithreaded processor architectures and multiprocessor sharing and coherency.”

Another indication of Seattle’s emergence as a center of information technology innovation.

Yin Tat Lee and Thomas Rothvoss honored for significant contributions in mathematical optimization


Yin Tat Lee (left) and Thomas Rothvoss

Professors Yin Tat Lee and Thomas Rothvoss of the Allen School’s Theory of Computation group were recently recognized for significant contributions to the field of mathematical optimization. Lee, who joined the University of Washington faculty last year, received the A.W. Tucker Prize recognizing the best doctoral thesis in optimization in the past three years. Rothvoss, who holds a joint appointment in the Allen School and the Department of Mathematics, earned the Delbert Ray Fulkerson Prize recognizing outstanding papers in the area of discrete mathematics.

Lee received the Tucker Prize from the Mathematical Optimization Society for his thesis, “Faster Algorithms for Convex and Combinatorial Optimization,” completed while he was a Ph.D. student at MIT. In that thesis, Lee explored how combining and improving upon existing optimization techniques such as sparsification, cutting, and collapsing could yield faster algorithms for a variety of problems underpinning the theory and practice of computer science. His research generated a number of substantial advances, including faster algorithms for important problems in linear programming, convex programming, and maximum flow. Lee’s work was significant not only for its practical contributions but also for its philosophical ones: whereas researchers historically have tended to study continuous optimization and combinatorial – or discrete – optimization in isolation, Lee recognized that the two areas share common difficulties and could benefit from some of the same techniques. His results earned him MIT’s George M. Sprowls Award for the best Ph.D. thesis in computer science in 2016.
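To make that bridge concrete, here is the textbook formulation of maximum flow as a linear program (a standard formulation, not one drawn from Lee’s thesis); it is exactly this kind of encoding that lets improvements in continuous methods, such as interior point algorithms, translate into faster combinatorial algorithms.

```latex
% Maximum s-t flow on a directed graph G = (V, E) with capacities u_e,
% where \delta^{+}(v) and \delta^{-}(v) denote the edges leaving and
% entering vertex v, respectively.
\begin{align*}
  \max_{f \in \mathbb{R}^{E}} \quad & \sum_{e \in \delta^{+}(s)} f_e \;-\; \sum_{e \in \delta^{-}(s)} f_e \\
  \text{s.t.} \quad & \sum_{e \in \delta^{-}(v)} f_e \;=\; \sum_{e \in \delta^{+}(v)} f_e
      && \forall\, v \in V \setminus \{s, t\} \\
  & 0 \;\le\; f_e \;\le\; u_e && \forall\, e \in E
\end{align*}
```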


Yin Tat Lee (left) onstage with Tucker Prize Committee chair Simge Kucukyavuz (center) and Karen Aardal, chair of the Mathematical Optimization Society

Lee’s thesis was the culmination of several related lines of research that yielded faster algorithms for a variety of outstanding optimization problems — and yielded Lee and his collaborators numerous conference awards. These included Best Paper at the Symposium on Discrete Algorithms (SODA 2014) for a new algorithm for approximately solving maximum flow problems in near-linear time, and Best Student Paper and Best Paper at the Symposium on Foundations of Computer Science (FOCS 2014) for a new interior point method for solving general linear programs that represented the first significant improvement in the running time of linear programming in more than two decades. Lee and his colleagues subsequently earned Best Student Paper at FOCS 2015 for devising a faster cutting plane method for solving convex problems in near-cubic time.

Since his arrival at the Allen School, Lee has continued to push the state of the art, earning a CAREER Award from the National Science Foundation to further his efforts to develop faster, more efficient algorithms for solving convex and other optimization problems. He recently co-authored six papers accepted at the Symposium on Theory of Computing (STOC 2018), a record number of contributions from an individual researcher to the conference in a single year; the papers addressed an array of open problems in algorithmic convex geometry, asymptotic geometric analysis, operator theory, convex optimization, online algorithms, and probability. Since last summer, he has served as co-principal investigator for the Algorithmic Foundations of Data Science Institute (ADSI), which is developing new algorithmic tools to advance the field of data science with a $1.5 million grant from NSF.

Rothvoss was recognized with the Fulkerson Prize, which is co-sponsored by the Mathematical Optimization Society and the American Mathematical Society, for his paper “The Matching Polytope has Exponential Extension Complexity.” In that work, he set out to answer a central open question in combinatorial optimization concerning how compactly certain polytopes can be expressed as linear programs. Whereas multiple authors had established that various polytopes associated with NP-hard problems have exponential extension complexity, Rothvoss was interested in whether the same could be said for polytopes over which linear functions can be optimized in polynomial time. He established that this is, indeed, the case for the perfect matching polytope — proving that no polynomial-size linear program can capture the matching problem. It was a significant leap forward in theoreticians’ understanding of this topic, and one which revealed a fundamental limitation of a technique that has been extremely popular in the field of operations research.
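For readers unfamiliar with the terminology, the statement can be summarized as follows (an informal paraphrase of standard definitions, not a quotation from the paper).

```latex
% An extended formulation of a polytope P \subseteq \mathbb{R}^n is a
% polytope Q in a higher-dimensional space together with a linear map
% \pi satisfying \pi(Q) = P; the extension complexity xc(P) is the
% minimum number of facets (inequalities) over all such Q. Rothvoss
% proved that for the perfect matching polytope of the complete graph K_n,
\[
  \mathrm{xc}\bigl(P_{\mathrm{matching}}(K_n)\bigr) \;=\; 2^{\Omega(n)},
\]
% so no polynomial-size linear program can describe it, even though
% maximum matching itself is solvable in polynomial time by other means.
```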


Thomas Rothvoss (second from left) onstage with (left to right) Fulkerson Prize Committee member Martin Grötschel, and Karen Aardal and William Cook, chair and vice chair of the Mathematical Optimization Society

Rothvoss was previously recognized with a Best Paper Award at STOC 2014 for the same work, which he conducted while he was a postdoctoral researcher at MIT. That same year, he collected a Best Paper Award at SODA 2014 for his contribution to a new approximation algorithm for the bin packing problem. He previously earned Best Paper at STOC 2010 for his work on an approximation algorithm for the Steiner tree problem — a particularly important problem in the field of network design. More recently, Rothvoss earned a 2015 Sloan Research Fellowship and a 2016 Packard Fellowship for his work at the intersection of mathematics and computer science to develop new techniques for finding approximate solutions to computationally hard problems.

Last year, Rothvoss earned an NSF CAREER Award for his efforts to design new and better approximation algorithms to address several outstanding problems in combinatorial optimization, including the directed Steiner tree, graph coloring, unique games, and unrelated machine scheduling problems. The goal is to make it more efficient to extract value from vast quantities of data, which will benefit not only computer science but the broader scientific community and a variety of industries. Like Lee, Rothvoss has developed a keen interest in bridging the gap between discrete and continuous optimization, inspired by the emergence of machine learning and massive datasets that have opened up new lines of inquiry at the intersection of those two historically divergent fields. To that end, he co-organized a series of workshops last fall at the Simons Institute for the Theory of Computing that brought together researchers in both communities to stimulate interaction and collaboration on areas of shared interest.

Lee and Rothvoss collected their awards at the 23rd International Symposium on Mathematical Programming (ISMP 2018) last month in Bordeaux, France. ISMP, which is held every three years, is the flagship conference for researchers working in the field of mathematical optimization.

Congratulations, Thomas and Yin Tat!


With ApneaApp technology from the Allen School and UW Medicine, ResMed and SleepScore Labs awaken people to the dangers of poor sleep


The UW team behind ApneaApp, left to right: Nate Watson, Rajalakshmi Nandakumar, and Shyam Gollakota. Photo credit: Sarah McQuate/University of Washington

More than a billion people worldwide experience problems related to sleep, which can have a significant impact on their health, productivity, and overall quality of life. In the United States alone, an estimated 25 million people suffer from obstructive sleep apnea, a disorder in which a person’s breathing is repeatedly interrupted during sleep. If left untreated, sleep apnea can cause a variety of serious health issues, including hypertension, stroke, heart disease, diabetes, mood and memory problems, and more. Now, thanks to the ApneaApp technology from the University of Washington, people around the world can better understand their sleep in order to improve their health.

One of the barriers to identifying — let alone treating — apnea and other sleep-related issues has been a lack of useful data about the quality of people’s sleep. Clinical sleep studies such as the standard polysomnography test are time-consuming and expensive, while consumer sleep trackers often require the purchase of specialized hardware and can yield inaccurate or incomplete information about a person’s condition. In an attempt to put these issues to rest, Ph.D. student Rajalakshmi Nandakumar and professor Shyam Gollakota of the Allen School’s Networks & Mobile Systems Lab teamed up with Dr. Nathaniel Watson of the UW Medicine Sleep Center to create ApneaApp, which turns a smartphone into an active sonar system using the device’s built-in microphone and speakers to effectively track changes in a person’s breathing during sleep — without requiring specialized equipment or an overnight stay in a hospital or sleep clinic.

The technology behind ApneaApp was subsequently licensed by UW CoMotion to ResMed, a global leader in sleep technology and medical devices. The company forged a joint venture with SleepScore Labs to launch a new contactless sleep tracking app, the SleepScore mobile app, earlier this summer — putting the benefits of the ApneaApp technology into the hands of consumers for the first time.

“It is extremely gratifying to bring this research from the lab to the public,” Nandakumar, who was recognized with the CoMotion Graduate Innovator Award in 2016 for her work on ApneaApp, said in a press release.

The SleepScore app measures a person’s breathing by emitting inaudible sound waves from the phone. Those waves reflect off the sleeper’s body and return to the phone, where minute changes caused by chest and abdominal movements can be detected. Using algorithms and signal processing techniques developed at UW, the app gauges from these reflections the subject’s sleep stages, time to fall asleep, and number of awakenings throughout the night.
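For the curious, the toy simulation below sketches the underlying idea. It is a deliberately simplified illustration, not the ApneaApp or SleepScore algorithm, and every parameter is made up for the example: millimetre-scale chest motion modulates the phase of a reflected inaudible tone, and a breathing rate can be recovered from that slow modulation.

```python
import numpy as np

# Toy sketch of active-sonar breathing detection (NOT the ApneaApp/SleepScore
# algorithm): the phone emits an inaudible tone, chest motion modulates the
# phase of the echo, and the breathing rate is recovered from that slow
# modulation. All parameters are illustrative.

fs = 48_000            # audio sample rate (Hz)
f_tone = 18_000        # inaudible carrier tone (Hz)
duration = 32          # seconds of simulated "recording"
breath_hz = 0.25       # simulated breathing rate: 15 breaths per minute
c = 343.0              # speed of sound (m/s)

t = np.arange(int(fs * duration)) / fs

# Millimetre-scale chest displacement changes the round-trip echo path.
displacement = 0.001 * np.sin(2 * np.pi * breath_hz * t)      # metres
delay = 2 * displacement / c                                  # seconds

emitted = np.cos(2 * np.pi * f_tone * t)
echo = 0.1 * np.cos(2 * np.pi * f_tone * (t - delay))         # weak, delayed echo
received = emitted + echo + 0.01 * np.random.randn(t.size)    # microphone + noise

# Quadrature demodulation at the carrier isolates the slow modulation;
# block-averaging acts as a crude low-pass filter and decimator.
block = 480                                                   # 10 ms -> 100 Hz
bb_rate = fs // block

def to_baseband(x):
    return x.reshape(-1, block).mean(axis=1)

i_bb = to_baseband(received * np.cos(2 * np.pi * f_tone * t))
q_bb = to_baseband(received * -np.sin(2 * np.pi * f_tone * t))
phase = np.unwrap(np.angle(i_bb + 1j * q_bb))

# The dominant low-frequency component of the phase is the breathing rate.
spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(phase.size, d=1 / bb_rate)
band = (freqs > 0.1) & (freqs < 1.0)       # plausible breathing band (6-60 bpm)
estimate = freqs[band][np.argmax(spectrum[band])]
print(f"True rate: {breath_hz * 60:.0f} bpm, estimated: {estimate * 60:.1f} bpm")
```

A real deployment must also cope with background noise, multipath echoes, and movement unrelated to breathing, complications this toy sketch ignores entirely.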

Michael Wren, senior director of ResMed Sensor Technologies, credited the UW researchers along with ResMed’s Ireland-based software developers and the team at SleepScore Labs for making it easy for anyone to quantify and improve their sleep. “To see and manage a key facet of your health with just your smartphone is an incredible advancement that I hope millions take advantage of,” he said.

In addition to analyzing a person’s sleep and producing a nightly SleepScore, the free version of the SleepScore app incorporates tools to support goal-setting and personalized recommendations. A premium version of the app offers additional features, including complete sleep history, analytics, and exportable data that the user can share with their physician.

“We are excited that ResMed licensed our research into transforming the smartphone into an active sonar system,” Gollakota said. “And now, through their joint venture with SleepScore Labs, they’ve launched a product that will help enable millions of people to better understand their sleep.”

Read the UW CoMotion press release here, a related SleepScore Labs press release here, and the original UW News release on ApneaApp here. Download and try the SleepScore app here, and view a recent segment of The Dr. Oz Show featuring the app here.


With Ford grant, Taskar Center aims to expand the power of play for children of all abilities

Volunteers adapt toys at a hackathon on the University of Washington campus

The Allen School’s Taskar Center for Accessible Technology, working in partnership with the HuskyADAPT student organization at the University of Washington and the non-profit PROVAIL Therapy Center, has won an award as part of the 2018 Ford College Community Challenge to create a lending library of adapted toys and switches for children with diverse abilities in the Pacific Northwest. In line with the competition’s theme, “Making Lives Better,” the lending library will enable families and caregivers to borrow and trial adapted toys and equipment to ensure that they meet an individual child’s needs.

HuskyADAPT — short for Accessible Design & Play Technology — is an interdisciplinary collaboration between the Taskar Center and the UW Departments of Bioengineering and Mechanical Engineering focused on developing resources and infrastructure to expand access to inclusive play technology. The program has trained hundreds of students and members of the community in toy adaptation — skills that come in handy at the Taskar Center’s annual holiday toy hackathon.

Through HuskyADAPT, the lending library project, and other activities, the Taskar Center is working to expand toy adaptation globally through education, research, and hands-on projects.

“Play is an important part of learning, growing, and socializing as a child, but most toys are not designed with all users in mind,” said Taskar Center director Anat Caspi. “Thanks to the generous support of the Ford Motor Company Fund, we will be able to extend the power of play to kids who are often overlooked while improving awareness of accessibility issues and community engagement for everyone.”

Learn more about the winning C3 proposal here, and view a video about the project here. Interested in supporting toy adaptation or borrowing adapted toys? Sign up to receive updates from the Taskar Center here.

Congratulations to Anat and the entire team!


CS4HS 2018

Tom Cortina (CMU faculty and CS4HS instructor) shows the result of precisely following the teachers’ algorithm for making a PB&J sandwich!

Students from Human Centered Design and Engineering lead the teachers through a design exercise.

Last week marked CS4HS, the Allen School’s annual workshop for middle and high school teachers of math and science. Learn more here. And plan to join us next year!

Allen School’s new VTA accelerator enables developers to combine leading-edge deep learning with hardware co-design


The VTA open-source deep learning accelerator completes the TVM stack, providing complete transparency and customizability from the user-facing framework down to the hardware on which these workloads run.

A team of Allen School researchers today unveiled the new Versatile Tensor Accelerator (VTA), an extension of the TVM framework designed to advance deep learning and hardware innovation. VTA is a generic, customizable deep-learning accelerator that researchers can use to explore hardware-software co-design techniques. Together, VTA and TVM offer an open, end-to-end hardware-software stack for deep learning that will enable researchers and practitioners to combine emerging artificial intelligence capabilities with the latest hardware architectures.

VTA is more than a stand-alone accelerator design: it incorporates drivers, a JIT runtime, and an optimizing compiler stack based on TVM. It offers users the option to modify hardware data types, memory architecture, pipelining stages, and other factors for a truly modular solution. The current version also includes a behavioral hardware simulator and can be deployed on low-cost field-programmable gate arrays (FPGAs) for rapid prototyping. This potent combination provides a blueprint for an end-to-end, accelerator-centric deep learning system that supports experimentation, optimization, and hardware-software co-design — and enables machine learning practitioners to more easily explore novel network architectures and data representations that typically require specialized hardware support.
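To give a sense of what that customizability means in practice, the sketch below describes a few of the design knobs a parameterizable accelerator template of this kind might expose. The field names are hypothetical and do not reflect VTA’s actual configuration schema; the point is simply that precision, tile size, and buffer sizes become parameters the rest of the stack can target.

```python
from dataclasses import dataclass

# Hypothetical sketch of the design knobs a parameterizable deep learning
# accelerator template might expose. Field names are illustrative only and
# do NOT reflect VTA's actual configuration schema.

@dataclass
class AcceleratorConfig:
    input_bits: int = 8          # data type width of input tensors
    weight_bits: int = 8         # data type width of weights
    accum_bits: int = 32         # accumulator width
    gemm_rows: int = 16          # matrix (GEMM) unit tile height
    gemm_cols: int = 16          # matrix (GEMM) unit tile width
    input_buffer_kib: int = 32   # on-chip scratchpad sizes
    weight_buffer_kib: int = 256
    accum_buffer_kib: int = 128

    def macs_per_cycle(self) -> int:
        """Peak multiply-accumulates per cycle for this configuration."""
        return self.gemm_rows * self.gemm_cols

    def peak_gops(self, clock_mhz: float) -> float:
        """Peak throughput in giga-ops/second (2 ops per MAC)."""
        return 2 * self.macs_per_cycle() * clock_mhz * 1e6 / 1e9

# Example: compare an 8-bit baseline against a 4-bit variant that trades
# precision for a larger GEMM tile on the same FPGA fabric.
baseline = AcceleratorConfig()
low_precision = AcceleratorConfig(input_bits=4, weight_bits=4,
                                  gemm_rows=32, gemm_cols=32)
for cfg in (baseline, low_precision):
    print(f"{cfg.input_bits}-bit: {cfg.peak_gops(100):.0f} GOPS @ 100 MHz")
```

Running the script prints the peak throughput of each variant, illustrating the precision-versus-parallelism trade-off that an open, customizable template like VTA makes explorable.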

“VTA enables exploration of the end to end learning system design all the way down to hardware,” explained Allen School Ph.D. student Tianqi Chen. “This is a crucial step to accelerate research and engineering efforts toward future full-stack AI systems.”

The benefits of VTA can be extended across a range of domains, from hardware design, to compilers, to neural networks. The team is particularly interested to see how VTA empowers users to take advantage of the latest design techniques to fuel the next wave of innovation at the nexus of hardware and AI.


The team behind VTA, left to right, from top: Tianqi Chen, Ziheng Jiang, and Thierry Moreau; Luis Vega, Luis Ceze, and Carlos Guestrin; and Arvind Krishnamurthy.

“Hardware-software co-design is essential for future machine learning systems,” said Allen School professor Luis Ceze. “Having an open, functioning and hackable hardware-plus-software system will enable rapid testing of new ideas, which can have a lot of impact.”

In addition to Ceze and Chen, the team behind VTA includes Allen School Ph.D. students Thierry Moreau and Luis Vega, incoming Ph.D. student Ziheng Jiang, and professors Carlos Guestrin and Arvind Krishnamurthy. As was the case with the original TVM project, their approach with VTA was to engage potential users outside of the lab early and often — ensuring that they not only built a practical solution, but also cultivated a robust community of researchers and practitioners who are shaping the next frontier in computing. This community includes Xilinx, a leader in reconfigurable computing, and mobile technology giant Qualcomm.

“Xilinx Research is following TVM and VTA with great interest, which provide a good starting point for users who would like to develop their own deep learning accelerators on Xilinx FPGAs and integrate them end-to-end with a compiler toolflow,” said principal engineer Michaela Blott.

“Qualcomm is enabling Edge AI with power-efficient AI processors,” said Liang Shen, senior director of engineering at the company. “We are excited with such an open deep-learning compiler stack. It will help to establish a win-win ecosystem by enabling AI innovators to easily deploy their killer applications onto any AI-capable device efficiently.”

“We’re excited to see the start of an open-source, deep learning hardware community that places software support front and center,” Moreau said, “and we look forward to seeing what our users around the world will build with VTA and TVM.”

To learn more about VTA, read the team’s blog post here and technical paper here, and visit the GitHub repository here. Read more about the team’s previous work on the TVM framework here and the related NNVM compiler here.


Allen School strengthens its leadership in AI with the arrival of Hannaneh Hajishirzi

The Allen School is thrilled to officially welcome professor Hannaneh Hajishirzi, whose research and teaching span artificial intelligence, natural language processing, and machine learning, to the full-time faculty. Many members of the Allen School community will be familiar with Hajishirzi and her work from her time as a research professor in Electrical Engineering and an adjunct professor in Computer Science & Engineering at the University of Washington.

The goal of Hajishirzi’s research is the development of robust, scalable systems that can understand, interpret, and reason about data drawn from multiple sources. To that end, she designs algorithms for question answering, semantic understanding, and information extraction from textual and visual data, including news articles, web data, technical documents, and conversations. By leveraging two complementary developments in the field of AI — symbolic representation and end-to-end deep learning neural models — Hajishirzi aims to dramatically improve output while reducing latency and computational costs for a range of applications, including search engines, education, media, and financial services.

Working with colleagues at UW and the Allen Institute for Artificial Intelligence, Hajishirzi led the development of GeoS, the first automated system capable of solving geometry word problems that uses symbolic representations to understand and interpret natural language text and corresponding diagrams. She also led the development of Bi-Directional Attention Flow for Machine Comprehension (BiDAF) in collaboration with AI2 and UW researchers. BiDAF is a novel end-to-end neural question-answering system for textual paragraphs and diagrams that outperformed all previous QA systems tested on the Stanford Question Answering Dataset. Other projects include the development of systems that can automatically solve and generate algebra word problems; Skim-RNN, an efficient recurrent neural network that determines the importance of input tokens to downstream tasks and “skims” those that are unimportant, similar in concept to human speed-reading; Query Reduction Network (QRN), a variant on RNNs that is capable of multi-hop reasoning; and a new data-driven approach for extracting knowledge about life events using online photo albums — to name only a few.
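To give a flavor of the mechanism at the heart of BiDAF, the short numpy sketch below illustrates the general bidirectional attention idea, with context-to-query and query-to-context attention computed from a similarity matrix. It is a simplified stand-in rather than the published architecture, and the plain dot-product similarity here replaces the learned similarity function used in the actual model.

```python
import numpy as np

# Toy sketch of bidirectional attention between a context and a query
# (the general idea behind BiDAF-style readers, not the published model).

rng = np.random.default_rng(0)
d = 8                          # embedding dimension
T, J = 6, 3                    # number of context words, query words
H = rng.normal(size=(T, d))    # context word vectors (stand-ins for real encodings)
U = rng.normal(size=(J, d))    # query word vectors

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Similarity between every context/query word pair. The real model learns
# this function; a plain dot product stands in for it here.
S = H @ U.T                                # shape (T, J)

# Context-to-query: for each context word, a weighted summary of the query.
c2q = softmax(S, axis=1) @ U               # shape (T, d)

# Query-to-context: attend to the context words most relevant to any query
# word, then broadcast that single summary to every context position.
b = softmax(S.max(axis=1))                 # shape (T,)
q2c = np.tile(b @ H, (T, 1))               # shape (T, d)

# Query-aware representation of each context word, fed to downstream layers.
G = np.concatenate([H, c2q, H * c2q, H * q2c], axis=1)
print(G.shape)                             # (6, 32), i.e., (T, 4d)
```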

Hajishirzi first arrived at UW as a research scientist in 2012. Since joining the faculty three years ago, she has advised several Allen School Ph.D. students and has taught or guest-lectured in a variety of introductory and advanced courses in artificial intelligence, statistical machine learning, natural language processing, digital signal processing, and grounded language acquisition and vision.

Hajishirzi previously held postdoctoral research positions at Carnegie Mellon University and Disney Research. She has earned numerous awards for her research, including an Allen Distinguished Investigator Award, a Google Faculty Research Award, a Bloomberg Data Science Award, an Amazon Research Award, and a SIGDIAL Best Paper Award. Hajishirzi earned a bachelor’s degree in Computer Engineering from Sharif University of Technology and a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.

Welcome, Hanna!


Allen School invites K-12 teachers to explore computer science at CS4HS

A group of teachers try a CS Unplugged activity at CS4HS

The Allen School is gearing up for its annual CS4HS workshop for K-12 teachers taking place July 16 – 18 on the University of Washington campus in Seattle. CS4HS offers educators from across Washington an opportunity to explore computer science — no prior programming experience required — along with tips for incorporating CS principles into their classroom teaching. While the curriculum originally was designed with math and science teachers in mind, the Allen School welcomes teachers of all subjects who are interested in learning how to use computer science to enhance student learning.

Workshop participants will become familiar with simple concepts and activities that can be adapted to suit many different course subjects and grade levels. These include computational thinking, user-centered design, and basic coding using beginner-friendly programming environments such as Scratch and Processing. Presenters will also share useful information about academic and career pathways that teachers can take back to their classrooms. Upon completion of CS4HS, participants will have access to free tools to help nurture students’ creativity while exposing them to the fundamentals of this exciting subject.

CS4HS is a joint undertaking of UW, Carnegie Mellon University, and CS Unplugged. Since 2007, more than 600 teachers from around the state have participated in the UW-hosted workshop. Past participants have delivered rave reviews about the content, the speakers — even the food — that they encountered over the course of the three days.

“I attended the CS4HS summer workshop at the UW and was absolutely thrilled by the quality of the presentations, the breadth of material we covered and the quantity of classroom-ready material we received,” said CS4HS alumnus Judson Miller, a teacher at Roosevelt High School in Seattle. “As a high school math teacher, I was curious and excited to learn how this workshop on computer science might prove useful in my class. In the end, I can say absolutely that this workshop changed my teaching.”

There is a non-refundable registration fee of $50 for each participating teacher. Eligible teachers earn 20 clock hours of professional development credit from the Washington Science Teachers Association at no cost to them. For out-of-town participants, free accommodation in UW campus housing is available with a refundable security deposit.

To learn more and to join us at this year’s workshop, visit the CS4HS website.


Yejin Choi recognized with Borg Early Career Award

Yejin Choi, a professor in the Allen School’s Natural Language Processing research group, has earned a 2018 Borg Early Career Award from the Computing Research Association’s Committee on the Status of Women in Computing Research (CRA-W). The annual award, which is named in honor of pioneering computer scientist Anita Borg, recognizes women in computing who have made significant contributions to the field through their research and activities that promote diversity.

When Choi joined the Allen School faculty in 2014, she had already established herself as a rising star at the intersection of NLP and computer vision — work for which she shared the 2013 David Marr Prize, one of the most prestigious awards bestowed upon researchers in the computer vision community. Since then, Choi has directed her research toward developing machines’ ability to uncover meaning beyond that which is explicitly expressed in language — to “read between the lines,” so to speak — to enable more intelligent, intuitive communication and to promote artificial intelligence for social good.

Choi is particularly interested in language learning grounded within physical and social contexts to help machines develop common-sense understanding of how the world works and to identify and address implicit bias. To that end, Choi and her collaborators have developed connotation frames for understanding power and agency that they used to uncover bias in modern film scripts; a technique for extracting inferred knowledge about objects and their related actions that humans tend to accept as a given; and an analysis of linguistic patterns associated with online news to distinguish between true reporting, satire, and fake news — to name only a few.

Last year, Choi served as a faculty advisor to the University of Washington team that created a socialbot that captured first place in the inaugural Amazon Alexa Prize competition. She previously earned a place among “AI’s 10 to Watch,” a celebration of early-career researchers who are already making an impact on the field of AI that is compiled by IEEE Intelligent Systems. Choi currently splits her time between the University of Washington and the Allen Institute for Artificial Intelligence (AI2), where she oversees Project Alexandria to build a common-sense AI.

While her research has yielded advances in the detection and correction of language that targets underrepresented groups, Choi’s impact on diversity extends well beyond the lab. She has served as a faculty advisor for the UW College of Engineering’s STARS program, which helps aspiring engineering and computer science majors from underserved communities to successfully navigate the transition to more rigorous college-level coursework. As a member of the executive board of the Association of Computational Linguistics, Choi worked with a special committee aimed at advancing equity and diversity within the ACL community.

In addition to recognizing Choi, the CRA honored University of Michigan professor Reetuparna Das with a 2018 Borg Early Career Award for her service to Women in Computing Architecture (WiCArch) and her many activities encouraging high school girls and first-year college women to explore computing.

Previous Borg Early Career Award recipients with an Allen School connection include alumnae Martha Kim (Ph.D., ’08), a faculty member at Columbia University; A.J. Bernheim Brush (Ph.D., ’02), Principal Program Manager at Microsoft; and Gail Murphy (Ph.D., ’96), a faculty member at the University of British Columbia.

Read the award citation here.

Congratulations, Yejin!

