UW’s Shwetak Patel, Matt Reynolds, and Julie Kientz earn Ubicomp 10-Year Impact Award

From left: Gregory Abowd, Julie Kientz, Shwetak Patel, and Award Chair Judy Kay. Not pictured: Matthew Reynolds and Thomas Robertson.

University of Washington professors Shwetak Patel, Matt Reynolds, and Julie Kientz have been recognized with the 10-Year Impact Award at Ubicomp 2017 for the paper, “At the Flick of a Switch: Detecting and Classifying Unique Electrical Events on the Residential Power Line.” The paper, which originally earned the Best Paper Award and Best Presentation Award at Ubicomp 2007, was singled out by this year’s conference organizers for having lasting impact a decade after its original presentation.

Patel and Reynolds hold joint appointments in the Allen School and Department of Electrical Engineering. Kientz is a faculty member in the Department of Human Centered Design & Engineering with an adjunct appointment in the Allen School. Patel and Kientz were Ph.D. students and Reynolds was a senior research scientist at Georgia Tech when they co-authored the original paper with research scientist Thomas Robertson and professor Gregory Abowd.

The paper presents a novel approach for detecting energy activity within the home using a single plug-in sensor. The researchers applied machine learning techniques to enable the system to accurately differentiate between electrical events, such as turning on a specific light switch or operating certain appliances. This work paved the way for a new field of research in high-frequency energy disaggregation and infrastructure-mediated sensing. It also led to the creation of Zensi, a startup spun out of Georgia Tech and UW that was acquired by Belkin in 2010. Many other companies focused on home energy monitoring and automation have been formed based on the techniques first described in the winning paper.
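The classification idea can be sketched in a few lines: extract a spectral fingerprint from the transient electrical noise that each switch or appliance puts on the power line, then match new events against labeled examples. The following is a hypothetical nearest-centroid toy, not the paper's actual feature set or classifier:

```python
import numpy as np

def noise_signature(waveform, n_bins=32):
    """Coarse spectral fingerprint of a transient on the power line."""
    spectrum = np.abs(np.fft.rfft(waveform))
    # Pool the spectrum into a fixed number of bins so signatures are comparable.
    usable = len(spectrum) // n_bins * n_bins
    pooled = spectrum[:usable].reshape(n_bins, -1).mean(axis=1)
    return pooled / (np.linalg.norm(pooled) + 1e-12)

def train_centroids(labeled_events):
    """labeled_events: dict mapping appliance label -> list of example waveforms."""
    return {label: np.mean([noise_signature(w) for w in waveforms], axis=0)
            for label, waveforms in labeled_events.items()}

def classify(waveform, centroids):
    """Assign the label whose average signature best matches this event."""
    sig = noise_signature(waveform)
    return max(centroids, key=lambda label: float(sig @ centroids[label]))
```

The key property this illustrates is that different loads leave distinguishable signatures at a single sensing point, so one plug-in sensor can cover the whole home.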

Matt Reynolds

This is the fifth year in a row that UW and Allen School researchers have been recognized at Ubicomp for the enduring influence of their contributions:

2016: The late professor Gaetano Borriello, UW EE Ph.D. alumnus Jonathan Lester, and collaborator Tanzeem Choudhury were recognized for their 2006 paper, “A Practical Approach to Recognizing Physical Activities.”

2015: A team that included Borriello, Ph.D. alumni Anthony LaMarca and Jeff Hightower, and Bachelor’s alumni James Howard, Jeff Hughes, and Fred Potter won for their 2005 paper, “Place Lab: Device Positioning Using Radio Beacons in the Wild.”

2014: Borriello and Hightower won for their 2004 paper, “Particle Filters for Location Estimation in Ubiquitous Computing: A Case Study.”

2013: Ph.D. alumni Don Patterson and Lin Liao, professor Dieter Fox, and then-professor Henry Kautz were recognized for their 2003 paper, “Inferring High-Level Behavior from Low-Level Sensors.”

Way to go, team!

September 14, 2017

UW researchers achieve breakthrough in ubiquitous connectivity with long-range backscatter

Long-range backscatter equipment

A team of researchers at the Allen School and University of Washington Department of Electrical Engineering has invented a long-range backscatter system that enables low-cost connectivity for a variety of objects and devices while consuming 1000x less power than existing technologies. The new system — which builds upon pioneering work by members of UW’s Networks & Mobile Systems Lab and Sensor Systems Lab on techniques for harvesting power from ambient signals — is the first of its kind capable of achieving the distances required for wide-area communication and truly ubiquitous connectivity.

Allen School professor Shyam Gollakota, head of the Networks & Mobile Systems Lab, believes the system will be a game-changer for many different industries. “Until now, devices that can communicate over long distances have consumed a lot of power,” Gollakota explained in a UW News release. “The tradeoff in a low-power device that consumes microwatts of power is that its communication range is short.”

“Now,” he said, “we’ve shown that we can offer both.”

Long-range backscatter works by having sensors reflect radio frequency (RF) signals to synthesize and transmit data packets that are decoded by a receiver. To expedite adoption, Gollakota and his colleagues — Allen School and EE professor Joshua Smith, director of the Sensor Systems Lab; former Allen School postdoctoral researcher and EE Ph.D. alumnus Vamsi Talla; Allen School Ph.D. student Mehrdad Hessar; former EE Ph.D. student Bryce Kellogg; and current EE Ph.D. student Ali Najafi — took into account form factor and cost when designing the system. The efficient and unobtrusive sensors, which are capable of drawing what little power they need from ultra-thin printed batteries or from ambient sources, cost as little as 10 cents apiece. The system also relies on readily available commodity hardware for the receiver, rather than a custom design.

Team members (from left) Bryce Kellogg, Vamsi Talla, and Mehrdad Hessar

The researchers had to overcome some technical challenges. To ensure the receiver could distinguish the backscattered reflections from the original signal and other noise, they introduced the first backscatter design employing chirp spread spectrum (CSS) modulation. This approach enables the reflected signals to be spread across multiple frequencies for greater sensitivity over long distances. They also developed the first backscatter harmonic cancellation mechanism to cancel out sideband interference, and a link-layer protocol to enable multiple long-range backscatter devices to share the spectrum. The researchers are presenting their research paper at the Ubicomp 2017 conference this week in Maui, Hawaii.
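The core idea of chirp spread spectrum is that each symbol is a frequency sweep, which a receiver can pick out by correlation even when the signal is buried in noise. Below is a minimal baseband illustration with made-up sample rate and frequencies; real LoRa-style CSS uses cyclically shifted chirps, and the team's backscatter design adds harmonic cancellation and a link-layer protocol on top:

```python
import numpy as np

FS = 10_000          # sample rate (Hz) -- illustrative value
T = 0.01             # chirp duration (s)
F0, F1 = 500, 2000   # chirp start/end frequencies (Hz)

def chirp(up=True):
    """A linear frequency sweep; phase is the integral of instantaneous frequency."""
    t = np.arange(int(FS * T)) / FS
    f_start, f_end = (F0, F1) if up else (F1, F0)
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * T))
    return np.cos(phase)

def modulate(bits):
    """Encode each bit as an up-chirp (1) or a down-chirp (0)."""
    return np.concatenate([chirp(up=b == 1) for b in bits])

def demodulate(signal):
    n = int(FS * T)
    up, down = chirp(True), chirp(False)
    bits = []
    for i in range(0, len(signal), n):
        seg = signal[i:i + n]
        # Correlate against both templates; the chirp's processing gain is
        # what lets weak reflected signals survive noise over long distances.
        bits.append(1 if abs(seg @ up) > abs(seg @ down) else 0)
    return bits
```

Even with noise comparable in amplitude to the signal itself, the correlation step recovers the bit sequence, which is the property that makes CSS attractive for faint backscattered reflections.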

The team envisions a number of ways in which the technology could be used, from precision agriculture and smart cities, to medical monitoring and whole-home sensing. To demonstrate the system’s reliability, the researchers deployed long-range backscatter in a variety of real-world settings. These included a 4,800 square-foot multi-story house; a 13,000 square-foot multi-room office space containing wood, concrete, and metal; and a working vegetable farm spread over one acre. They also built prototypes of a smart contact lens and flexible epidermal patch to illustrate how long-range backscatter will enable new capabilities in wearable technology.

The future of ubiquitous connectivity envisioned by the team may not be far off. Jeeva Wireless, the UW spinout co-founded by the researchers to commercialize backscatter-related research, aims to bring the long-range backscatter system to market within the next six months.

“People have been talking about embedding connectivity into everyday objects such as laundry detergent, paper towels and coffee cups for years,” said Talla, who co-founded the company and serves as its CTO. “This is the first wireless system that can inject connectivity into any device with very minimal cost.”

To learn more about long-range backscatter, read the UW News release here and visit the project page here. Also check out coverage by The Economist, GeekWire, and Engadget.

Photos: Dennis Wise/University of Washington

September 13, 2017

Jeffrey Heer wins IEEE Visualization Technical Achievement Award

Jeffrey Heer

Allen School professor Jeffrey Heer, who leads the Interactive Data Lab at the University of Washington, has been recognized with the 2017 IEEE Visualization Technical Achievement Award from the Institute of Electrical & Electronics Engineers. The IEEE Visualization and Graphics Technical Committee (VGTC) selected Heer based on his contributions to the “design, development, dissemination and popularization of languages for visualization.” Winners of the Visualization Technical Achievement Award are nominated by their peers.

The IEEE VGTC award citation recounts how Heer’s interest in visualization began as an undergraduate student in electrical engineering and computer science at the University of California, Berkeley. It was then that he first encountered the Hyperbolic Tree, a visualization technique developed at Xerox PARC. Its mathematical elegance and the ease with which it enabled one to move through massive hierarchies first drew Heer to visualization research. His subsequent frustration at being unable to rapidly prototype new designs inspired him to focus on higher-level abstractions for expressing visualization and interaction techniques.

Such techniques would be used by Heer and his collaborators to develop a series of robust tools for producing interactive visualizations on the web. These include Prefuse — one of the first software frameworks for information visualization he developed as a graduate student working with Stu Card at Xerox PARC and James Landay, a faculty member at what was then the Department of Computer Science & Engineering at the University of Washington — and Flare, a version of Prefuse built for Adobe Flash that was partly informed by Heer’s work on animated transitions with George Robertson at Microsoft Research.

As a faculty member at Stanford University, Heer worked with Mike Bostock on Protovis, a graphical toolkit for visualization that married the efficiency of high-level visualization systems and the expressiveness and accessibility of low-level graphical systems, and Data-Driven Documents (D3), which succeeded Protovis as the de facto standard for interactive visualizations on the web. Heer also contributed to Data Wrangler, an interactive tool for cleaning and transforming raw data that was developed by researchers at Stanford and Berkeley. Heer and colleagues co-founded a startup company, Trifacta, to commercialize that work.

Since joining the Allen School faculty in 2013, Heer has worked with a team of graduate students in the Interactive Data Lab on a suite of complementary tools for data analysis and visualization design built on Vega, a declarative language for producing interactive visualizations. These tools include Lyra, an interactive environment for generating customized visualizations; Voyager, a recommendation-powered visualization browser; and Vega-Lite, a high-level grammar of interactive graphics.

Heer will be formally recognized at the IEEE VIS conference next month in Phoenix, Arizona. The award marks the second time this year that Heer has been singled out for his technical contributions. In May, the Association for Computing Machinery honored Heer with the prestigious Grace Murray Hopper Award for his pioneering work on visualization tools that have transformed how people interact with data.

Read the full IEEE VGTC citation celebrating Heer’s contributions here.

Congratulations, Jeff!

September 12, 2017

Securing the Fourth Estate: What the Panama Papers and Confidante reveal about journalists’ needs and practices

Reporters contributing to the Panama Papers investigation meet in Munich, Germany to receive training on ICIJ’s research tools. Photo credit: Kristof Clerix

When the Panama Papers story first broke in April 2016, its explosive revelations of a vast and hidden network of offshore shell companies and financial scandals-in-waiting tied to politicians, corporations, banking institutions, and organized crime represented a victory for good, old-fashioned investigative journalism — with a high tech twist. In addition to provoking international outrage, toppling governments, and instigating audits and investigations in more than 70 countries, the story caught the eye of researchers like Allen School professor Franziska Roesner, who — working with a team of researchers from the University of Washington’s Security and Privacy Research Lab and collaborators at Columbia University and Clemson University — has made a study of the security practices of journalists and developed new solutions tailored to the needs of the Fourth Estate.

While the users of secure systems are notoriously the weakest link, what Roesner and colleagues found in examining the successful Panama Papers investigation was that the users — in this case, the more than 300 reporters spread across six continents working under the auspices of the International Consortium of Investigative Journalists (ICIJ) — were, in fact, a source of strength.

“Success stories in computer security are rare,” noted Roesner. “But we discovered that the journalists involved in the Panama Papers project seem to have achieved their security goals.”

The researchers set out to determine how hundreds of journalists with varying degrees of technical acumen were able to securely collaborate on the year-long investigation, which involved 11.5 million leaked documents from Panama-based law firm Mossack Fonseca that implicated individuals and entities at the highest reaches of power. They relied on a combination of survey data from 118 journalists who participated in the investigation, and in-depth, semi-structured interviews with those who designed and implemented the security systems that facilitated global collaboration while protecting those doing the collaborating. The team presented its findings in their paper, “When the Weakest Link Is Strong: Secure Collaboration in the Case of the Panama Papers,” as part of the 26th USENIX Security Symposium in Vancouver, Canada last month.

Allen School professor Franziska Roesner has made a study of journalists’ security needs and practices

Roesner and her colleagues were surprised to discover the extent to which ICIJ was able to strictly and consistently enforce security requirements such as PGP and two-factor authentication — even among those for whom such tools and practices were new. One of the main reasons the operation was a success, the researchers found, came down to utility.

“We found that the tools developed for the project were highly useful and usable, which motivated journalists to use the secure communication platforms provided by the ICIJ,” explained Susan McGregor, a professor at Columbia Journalism School and a principal investigator, along with Kelly Caine of Clemson University’s School of Computing, on the study.

They also found that journalists were motivated by more than sheer usefulness: their sense of community, and responsibility to that community, spurred them to not only tolerate but to embrace the strict security requirements put in place.

“The project leaders frequently communicated the importance of security and mutual trust,” Roesner noted. “This cultivated a strong sense of shared responsibility for the security of not only themselves, but of their colleagues — they were all in this together, and that was a powerful factor in the success of the operation, from a security standpoint.”

It also helped that the ICIJ walked their talk: if a journalist did not have access to a cellphone that could serve as a second factor, the organization purchased and configured one for them. They also made PGP a default tool and ensured everyone had a PGP key, thus taking the guesswork out of evaluating and selecting appropriate tools for themselves.

ICIJ’s approach helped it to avoid a number of known pitfalls when it comes to journalists’ security. Earlier work by Roesner and her collaborators that examined the security and privacy needs and constraints of journalists as well as those of the media organizations that employ them revealed the inadequacy of current tools, which often impede the gathering of information. The researchers found that this often led journalists to create ad-hoc workarounds that may compromise their own security and the security of their sources.

Armed with the lessons learned from those previous studies, Roesner teamed up with Allen School Ph.D. students Ada Lerner (now a faculty member at Wellesley College) and Eric Zeng, and undergraduate student Mitali Palekar to develop Confidante, a usable encrypted email client for journalists and others who require secure electronic communication that aims to improve on traditional PGP tools like those used in the Panama Papers investigation.

“We built Confidante to explore how we could combine strong security with ease of use and minimal configuration. One of our goals was for it to feel, as much as possible, like using regular email,” explained Lerner.

Confidante team members, clockwise from top left: Ada Lerner, Mitali Palekar, and Eric Zeng

“Building it allowed us to get really specific with journalists in our user study, since it was a prototype they could try out and react to — and that allowed us to ask them about the ways in which it did and didn’t meet their needs,” she continued. “It let us more concretely understand what kind of system might be able to provide journalists with strong protections, including reducing user errors that might inadvertently compromise their security.”

Confidante is built on top of Gmail to send and receive messages and Keybase for automatic public/private key management. In a study of a working prototype involving journalists and lawyers, the team found that Confidante enabled users to complete an encrypted email task more quickly, and with fewer errors, compared to an existing email encryption tool. Compatibility with mobile was another factor that met with users’ approval.
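Confidante's key property, automatic key management, can be sketched as an encrypt-then-send flow in which the client fetches the recipient's published key so the user never handles key material. Everything below is a hypothetical sketch: `lookup_key`, the keyserver dict, and the injected `encrypt`/`transport` functions are placeholders, not Confidante's or Keybase's actual APIs.

```python
def lookup_key(keyserver, username):
    """Automatic key discovery, in the spirit of Confidante's use of Keybase."""
    key = keyserver.get(username)
    if key is None:
        raise KeyError(f"no public key published for {username}")
    return key

def send_encrypted(keyserver, encrypt, transport, sender, recipient, body):
    """Encrypt-then-send: key lookup and encryption happen without user input."""
    public_key = lookup_key(keyserver, recipient)
    transport(to=recipient, frm=sender, payload=encrypt(body, public_key))
```

In the real system the encryption step would be PGP and the transport would be Gmail; the point the sketch makes is that the caller supplies neither a key nor a passphrase, which is where traditional PGP clients tend to lose users.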

“Every journalist and lawyer involved in our user study regularly reads and responds to email on the go, so any encrypted email solution developed for this group must work on mobile devices,” noted Zeng. “As a standalone email app built with modern web technologies, Confidante meets this need, whereas integrated PGP tools like browser extensions do not.”

Some participants observed that using Confidante, with its automated key management, was not that different from sending regular email — suggesting that Roesner and her colleagues had hit the mark when it comes to balancing user preferences and strong security.

“Tools fail in part when the technical community has built the wrong thing, so it’s important for us as computer security researchers to understand user needs and constraints,” observed Roesner. “What the Panama Papers study and Confidante illustrate is that there are ways to help journalists to do their jobs securely as well as effectively — and this is important not just for these individuals and their sources, but for society at large.”

Read the USENIX Security paper to learn more about computer security and the Panama Papers. Visit the Confidante website to try out the prototype and view the publicly available source code from the Allen School research team.

September 11, 2017

UW researchers are working on a way to screen for concussion using a smartphone

Dr. Lynn McGrath and Alex Mariakakis demonstrate PupilScreen

According to the U.S. Centers for Disease Control, an estimated 3.8 million sports-related concussions occur each year in this country alone — and roughly half of those go undiagnosed. Researchers in the Allen School’s UbiComp Lab and UW Medicine hope to reduce that risk and make sports safer by developing PupilScreen, a tool for measuring whether someone has suffered concussion by means of a smartphone app.

Currently, coaches and parents rely on subjective assessments like asking athletes questions or having them demonstrate their balance to determine whether they may be suffering from concussion. In cases of severe traumatic brain injury, physicians may use a penlight test — or, more rarely, an expensive pupillometer device — to measure a patient’s pupillary light reflex (PLR). Even with these tests, physicians rely on a process of elimination to rule out the most severe indicators of head trauma and arrive at a diagnosis of concussion. Inspired by research indicating that subtle changes in the PLR can point to concussion, the UW team aimed to create an inexpensive and easy-to-use tool for anyone, anywhere to gather the objective data needed to make the right call when it comes to an athlete’s health.

“Right now the best screening protocols we have are still subjective,” professor Shwetak Patel, who holds a joint appointment in the Allen School and UW Department of Electrical Engineering, told UW News. “A player who really wants to get back on the field can find ways to game the system.”

PupilScreen would eliminate this element of uncertainty by providing an objective way to assess an individual for brain injury. It uses the smartphone camera flash to stimulate the pupillary response, then records a brief video of the pupil changing diameter. The video is processed using convolutional neural networks to measure changes in the pupil’s diameter and identify clinically relevant deviations from the normal pupillary response. In a small pilot study involving a combination of people with and without traumatic brain injury, clinicians successfully diagnosed cases of injury with near-perfect accuracy using PupilScreen. Although they relied on a 3-D printed box to control the amount of light to which subjects’ eyes were exposed during the study, the researchers are working on an app that can be used without accessories.
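Stripped to its essentials, the measurement tracks pupil diameter frame by frame and quantifies how the pupil responds to the flash. Here is a deliberately simplified stand-in for the paper's neural-network pipeline, using dark-pixel thresholding on an idealized grayscale frame (the threshold value is made up):

```python
import numpy as np

def pupil_diameter(frame, threshold=50):
    """Estimate pupil diameter in pixels from a grayscale eye image.

    Toy stand-in for PupilScreen's CNN: threshold the dark pupil region
    and infer diameter from its area, assuming a roughly circular pupil.
    """
    pupil = frame < threshold
    area = pupil.sum()
    return 2.0 * np.sqrt(area / np.pi)

def constriction_ratio(diameters):
    """Fraction by which the pupil constricts over a post-flash video."""
    d = np.asarray(diameters, dtype=float)
    return (d[0] - d.min()) / d[0]
```

A clinically relevant deviation would show up as, for example, an abnormally small constriction ratio or a sluggish time course; the real system learns such features from video rather than hand-coding them.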

“The vision we’re shooting for is having someone simply hold the phone up and use the flash,” said Allen School Ph.D. student and lead author Alex Mariakakis. “We want every parent, coach, caregiver or EMT who is concerned about a brain injury to use it on the spot without needing extra hardware.”

The CDC has found that individuals who suffer concussion are six times more likely to suffer a future head injury. With a tool like PupilScreen at their disposal, coaches and clinicians would be able to obtain an objective reading of an athlete’s condition before risking their return to the field of play. According to Dr. Lynn McGrath, co-author and resident physician in the Department of Neurological Surgery at UW Medicine, PupilScreen would fill an important gap in concussion screening and treatment.

“After further testing, we think this device will empower everyone from Little League coaches to NFL doctors to emergency department physicians to rapidly detect and triage head injury,” he said.

The team developing PupilScreen includes Jacob Baudin, UW medical and doctoral student in physiology and biophysics; Allen School Ph.D. student Eric Whitmire and undergraduates Vardhman Mehta and Megan Banks; and Dr. Anthony Law, resident physician in the UW Medicine Department of Otolaryngology – Head and Neck Surgery. The researchers will present PupilScreen at the UbiComp 2017 conference in Maui, Hawaii next week.

The next step will be to enlist the help of coaches and medical providers in field testing PupilScreen and gathering data to refine the tool by identifying which pupillary responses are most helpful for measuring ambiguous cases of concussion. The team plans to begin further testing next month, and aims to make a PupilScreen app commercially available within the next two years.

“Having an objective measure that a coach or parent or anyone on the sidelines of a game could use to screen for concussion would truly be a game-changer,” Patel said.

Read the UW News release here, and visit the PupilScreen project page here. Read the team’s research paper here.

September 6, 2017

UW’s Sounding Board named a finalist for $2.5 million Amazon Alexa Prize

Left to right: Noah Smith, Maarten Sap, Ari Holtzman, Mari Ostendorf, Elizabeth Clark, Hao Fang, Yejin Choi, and Hao Cheng

A team of researchers from the Allen School and University of Washington Department of Electrical Engineering has been named a finalist for Amazon’s inaugural Alexa Prize. The UW team, which is the only North American competitor to make it to the next round, earned one of only three available places for Sounding Board, a conversational agent developed to engage users in thoughtful, informative discussion and transform how people interact with everyday devices in their homes.

The Alexa Prize is a $2.5 million university competition designed to encourage the development of conversational artificial intelligence (AI). The goal is to produce a socialbot — an AI capable of coherent conversation with humans — that is able to converse about popular topics and current events for 20 minutes. Teams built their socialbots using the Alexa Skills Kit and receive continuous, real-world feedback from the millions of Amazon customers who have interacted with them anonymously through Alexa.

Sounding Board combines expertise in deep learning, reinforcement learning, language generation, psychologically-infused natural language processing, and human-AI collaboration. The team is led by EE Ph.D. student Hao Fang, working with EE professor and lead advisor Mari Ostendorf, with contributions from EE Ph.D. student Hao Cheng, Allen School Ph.D. students Elizabeth Clark, Ari Holtzman, and Maarten Sap, and professors Yejin Choi and Noah Smith of the Allen School’s Natural Language Processing research group.

More than 100 teams from universities in 22 countries applied to be part of the inaugural competition. The finalists were selected from among 12 semifinalists whose socialbots were evaluated based on customer ratings of their interactions during hundreds of thousands of conversations between July 1 and August 15. In a blog post, Amazon’s Ashwin Ram, senior manager for Alexa AI, announced that UW’s Sounding Board team and the Alquist team from the Czech Technical University in Prague received the highest average customer ratings, earning them a place in the finals. What’s up Bot from Heriot-Watt University in Edinburgh, Scotland earned the wildcard slot.

The winner of the Alexa Prize will be announced at AWS re:Invent 2017 in November in Las Vegas, Nevada. Amazon will publish participating teams’ technical papers in the Alexa Prize Proceedings as a way of sharing their work with the broader research community.

Read more about the finalists in GeekWire and Xconomy, and contribute to the development of conversational AI by saying “Alexa, let’s chat” to interact with the finalists’ socialbots.

Go team!

September 1, 2017

Snap a selfie, screen for pancreatic cancer with new app from UW researchers

Alex Mariakakis demonstrates BiliScreen app using 3-D printed box

They say that the eyes are a window to the soul; with a new smartphone app developed by researchers at the University of Washington, they are now also a window to one’s health. Members of the Allen School’s UbiComp Lab, working with clinicians at UW Medicine, developed BiliScreen to enable anyone to snap a selfie and get screened for pancreatic cancer. The app detects adult jaundice — an early symptom of the disease — to enable more timely intervention and better outcomes for people at risk of developing one of the deadliest forms of cancer.

Jaundice is a yellowing of the skin and eyes produced when excess bilirubin is present in the bloodstream. The condition can be an indicator of a variety of diseases, including pancreatic cancer and hepatitis. In the case of the former, by the time a patient’s jaundice becomes visible, the condition often has advanced past the point of successful treatment.

“The eyes are an interesting gateway to the body,” said Shwetak Patel, director of the UbiComp Lab and a professor in the Allen School and UW Department of Electrical Engineering, in a UW News release. “Our question was: Could we capture some of these changes that might lead to earlier detection with a selfie?”

It turns out that they could by combining a smartphone’s camera, computer vision, and machine learning. Using a selfie of someone’s eyes, BiliScreen applies computer vision algorithms to the image to isolate the sclera, or white part of the eye, for analysis. It then calculates color information from the pixels of the sclera, discarding blood vessels and eyelashes in the process, and uses machine learning to correlate the color descriptor with bilirubin levels to determine whether that person has jaundice.
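That pipeline reduces to a color feature computed over the sclera pixels plus a learned regression onto bilirubin level. The sketch below is deliberately simplified — a mean-RGB descriptor and a linear least-squares fit, with the vessel and eyelash removal omitted; BiliScreen's actual features and model are richer:

```python
import numpy as np

def sclera_color_descriptor(image, sclera_mask):
    """Mean RGB over the sclera pixels (a toy stand-in for BiliScreen's
    color features; blood-vessel and eyelash removal is omitted here)."""
    return image[sclera_mask].mean(axis=0)

def fit_bilirubin_model(descriptors, bilirubin_levels):
    """Least-squares linear map from color descriptor to bilirubin level."""
    X = np.column_stack([descriptors, np.ones(len(descriptors))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(bilirubin_levels, float), rcond=None)
    return coef

def predict_bilirubin(coef, descriptor):
    return float(np.append(descriptor, 1.0) @ coef)
```

The intuition the sketch captures is that rising bilirubin shifts the sclera's color toward yellow (i.e., suppresses the blue channel), so even a simple regression on color can track it once lighting is controlled.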

The research team measured BiliScreen’s effectiveness in a clinical study involving 70 people, with the help of a 3-D printed box — similar to a Google Cardboard headset — to control for variations in lighting conditions. BiliScreen correctly identified jaundice nearly 90 percent of the time compared to the standard, more invasive blood test. The app can also be used with a second accessory, paper glasses printed with colored squares to calibrate for differences in ambient lighting. The ultimate goal, however, is to remove the need for any accessories so that people can use the app anytime, anywhere, to instantly screen for a disease that has a five-year survival rate of just nine percent.

“By the time you’re symptomatic, it’s frequently too late,” noted Allen School Ph.D. student Alex Mariakakis, lead author of the research paper. “The hope is that if people can do this simple test once a month — in the privacy of their own homes — some might catch the disease early enough.”

Patel and Mariakakis developed BiliScreen working with UW Medicine professors James Taylor and Lei Yu, Allen School undergraduate Megan Banks, and research study coordinator Lauren Phillipi. The project builds on an earlier collaboration between the UbiComp Lab and UW Medicine that produced BiliCam, an app for screening newborn jaundice that is the subject of a recent article published in the journal Pediatrics.

BiliScreen will be presented at the UbiComp 2017 conference next month in Maui, Hawaii.

Read the UW News release here, the research paper here, and visit the BiliScreen project page here. Check out coverage of BiliScreen by IEEE Spectrum, GeekWire, PBS News Hour, The Independent, and Vice News.

Photo credit: Dennis Wise/University of Washington

August 28, 2017

UW’s Sham Kakade and Maryam Fazel earn NSF TRIPODS Award to advance the state of the art in data science

Sham Kakade and Maryam Fazel

A team of University of Washington researchers co-led by Sham Kakade, a professor in the Allen School and Department of Statistics, and Electrical Engineering professor Maryam Fazel has secured a $1.5 million award from the National Science Foundation (NSF) to develop new algorithmic tools that will advance the state of the art in data science. The funding will support the researchers’ project titled “Algorithms for Data Science: Complexity, Scalability, and Robustness” as part of the agency’s Transdisciplinary Research in Principles of Data Science (TRIPODS) program.

TRIPODS was designed to engage members of the theoretical computer science, mathematics and statistics communities in developing the theoretical foundations of data science to promote data-driven discovery. The UW proposal aims to produce a common language and set of design principles to guide the development of new algorithmic tools that will automate the process of extracting robust insights from vast troves of data.

“Modern data science challenges transcend the ideas of any single discipline, which is what makes this work so exciting,” Kakade said. “With the growing availability of large datasets and increasing computational resources, we need more robust algorithmic tools to address contemporary data science challenges — and we believe a unifying approach is needed to overcome those challenges, accelerate the pace of scientific discovery and generate solutions to real-world problems.”

In addition to developing the language for data-driven discovery, the researchers intend to train students and postdoctoral scholars to be well-versed in the disciplines that underpin data science and incorporate theoretical ideas into a data science curriculum.

“Our project is unique in that it places mathematical optimization theory at the heart of this endeavor, bridging across computer science, mathematics, and statistics,” said Fazel, who holds adjunct appointments in the Allen School, Statistics, and the Department of Mathematics. “The project covers both high-impact interdisciplinary research and institutional activities such as workshops and boot camps to train students with novel techniques from all three disciplines. Ultimately, the project could serve as a springboard for building a full-fledged NSF institute in Phase II of the program.”

Left to right: Dmitriy Drusvyatskiy, Zaid Harchaoui, and Yin Tat Lee

Fazel’s research spans optimization, machine learning, signal processing, and system identification. Kakade, who is the Washington Research Foundation Data Science Chair at UW and an adjunct faculty member in Electrical Engineering, focuses on theoretical and applied questions in machine learning and artificial intelligence. They are joined on the project by three co-principal investigators: Mathematics Professor Dmitriy Drusvyatskiy, Statistics Professor Zaid Harchaoui, and Allen School Professor Yin Tat Lee.

The project is one of 12 proposals that received TRIPODS grants. The awards represent the NSF’s first major investment in “Harnessing the Data Revolution,” one of 10 “big ideas” the agency has identified as critical for future investment.

“These new TRIPODS projects will help build the theoretical foundations of data science that will enable continued data-driven discovery and breakthroughs across all fields of science and engineering,” said Jim Kurose, assistant director for Computer and Information Science and Engineering (CISE) at NSF, in a press release.

Read the NSF announcement here, and the UW team’s abstract here.

August 24, 2017

Allen School’s open-source TVM framework bridges the gap between deep learning and hardware innovation

Illustration of the gap between deep learning frameworks and different types of hardware

Deep learning has become increasingly indispensable for a broad range of applications, including machine translation, speech and facial recognition, drug discovery, and social media filtering. This growing reliance on deep learning has been fueled by a combination of increased computational power, decreased data storage costs, and the emergence of scalable deep learning systems like TensorFlow, MXNet, Caffe and PyTorch that enable companies and organizations to analyze and extract value from vast amounts of data with the help of neural networks.

But existing systems have limitations that hinder their deployment across a range of devices. Because they are optimized for a narrow range of hardware platforms, such as server-class GPUs, it takes considerable engineering effort and expense to adapt them for other platforms — not to mention provide ongoing support. The Allen School’s novel TVM framework aims to bridge that gap between deep learning systems, which are optimized for productivity, and the multitude of programming, performance and efficiency constraints imposed by different types of hardware.

With TVM, researchers and practitioners in industry and academia will be able to quickly and easily deploy deep learning applications on a wide range of systems, including mobile phones, embedded devices, and low-power specialized chips — and do so without sacrificing battery power or speed.

“TVM acts as a common layer between the neural network and hardware back end, eliminating the need to build a separate infrastructure optimized for each class of device or server,” explained project lead Tianqi Chen, an Allen School Ph.D. student who focuses on machine learning and systems. “Our framework allows developers to quickly and easily deploy and optimize deep learning systems on a multitude of hardware devices.”

Portraits of researchers who developed the TVM framework

TVM was developed by a team of researchers with expertise in machine learning, systems and computer architecture. In addition to Chen, the team includes Allen School Ph.D. students Thierry Moreau and Haichen Shen; professors Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy; and Ziheng Jiang, an undergraduate student at Fudan University and intern at AWS.

“With TVM, we can quickly build a comprehensive deep learning software framework on top of novel hardware architectures,” said Moreau, whose research focuses on computer architecture. “TVM will help catalyze hardware-software co-design in the field of deep learning research.”

“Researchers always try out new algorithms in deep learning, but high-performance libraries usually fall behind,” added Shen, a Ph.D. student in systems. “TVM now helps researchers quickly optimize their implementation for new algorithms, and thus accelerates the adoption of new ideas.”

TVM is the base layer to a complete deep learning intermediate representation (IR) stack: it provides a reusable toolchain for compiling high-level neural network algorithms down to low-level machine code that is tailored to a specific hardware platform. The team drew upon the wisdom of the compiler community in building the framework, constructing a two-level intermediate layer consisting of NNVM, which is a high-level IR for task scheduling and memory management, and TVM, an expressive low-level IR for optimizing compute kernels. TVM is shipped with a set of reusable optimization libraries that can be tuned at will to fit the needs of various hardware platforms, from wearables to high-end cloud compute servers.
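The graph-level half of this stack can be illustrated with a toy example. The sketch below shows the idea behind one classic high-level IR optimization, operator fusion: merging a chain of adjacent element-wise operators so they compile to a single kernel instead of several, avoiding extra round trips to device memory. This is a simplified illustration only; the function name `fuse_elementwise` and the graph representation are hypothetical and are not part of the NNVM or TVM API.

```python
# Toy sketch of graph-level operator fusion, in the spirit of a
# high-level IR pass. Hypothetical names; not the actual TVM/NNVM API.

ELEMENTWISE = {"add", "mul", "relu"}  # ops considered safe to fuse

def fuse_elementwise(chain):
    """Greedily merge runs of adjacent element-wise ops into fused groups.

    `chain` is a list of op names describing a linear dataflow chain,
    where each op consumes the output of the previous one. Returns a
    list of tuples; each tuple would compile to a single kernel.
    """
    fused, current = [], []
    for op in chain:
        if op in ELEMENTWISE:
            current.append(op)          # extend the running fusion group
        else:
            if current:                 # flush any pending fused group
                fused.append(tuple(current))
                current = []
            fused.append((op,))         # non-fusible op gets its own kernel
    if current:
        fused.append(tuple(current))
    return fused

# A conv2d followed by three element-wise ops: the three ops fuse into
# one kernel rather than three.
print(fuse_elementwise(["conv2d", "add", "relu", "mul", "conv2d", "relu"]))
# -> [('conv2d',), ('add', 'relu', 'mul'), ('conv2d',), ('relu',)]
```

In the real stack, decisions like this are made on a full dataflow graph by NNVM, and the fused compute kernels are then lowered and tuned for the target hardware by TVM's low-level IR.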

“Efficient deep learning needs specialized hardware,” Ceze noted. “Being able to quickly prototype systems using FPGAs and new experimental ASICs is of extreme value.”

“With today’s release, we invite the academic and industry research communities to join us in advancing the state of the art in machine learning and hardware innovation,” said Guestrin.

In preparation for the public release, the team sought early contributions from Amazon, Qihoo 360, Facebook, UC Davis, HKUST, TuSimple, SJTU, and members of the DMLC open-source community.

“We have already had a terrific response from Amazon, Facebook, and several other early collaborators,” said Krishnamurthy. “We look forward to unleashing developers’ creativity and building a robust community around TVM.”

To learn more, read the technical blog and visit the TVM GitHub page.

August 17, 2017

First-Choice Majors of UW Confirmed Incoming Freshmen

The trend continues – at the University of Washington, and across the nation. The two charts shown here tell the story: they show the ten top first-choice majors of UW confirmed incoming freshmen from fall 2010 to fall 2017, and the first-choice College of Engineering majors of UW confirmed incoming freshmen over the same period.

August 17, 2017
