
UW’s Sounding Board named a finalist for $2.5 million Amazon Alexa Prize

Sounding Board team

Left to right: Noah Smith, Maarten Sap, Ari Holtzman, Mari Ostendorf, Elizabeth Clark, Hao Fang, Yejin Choi, and Hao Cheng

A team of researchers from the Allen School and University of Washington Department of Electrical Engineering has been named a finalist for Amazon’s inaugural Alexa Prize. The UW team, which is the only North American competitor to make it to the next round, earned one of only three available places for Sounding Board, a conversational agent developed to engage users in thoughtful, informative discussion and transform how people interact with everyday devices in their homes.

The Alexa Prize is a $2.5 million university competition designed to encourage the development of conversational artificial intelligence (AI). The goal is to produce a socialbot — an AI capable of coherent conversation with humans — that is able to converse about popular topics and current events for 20 minutes. Teams built their socialbots using the Alexa Skills Kit and received continuous, real-world feedback from millions of Amazon customers who interacted with them anonymously through Alexa.

Sounding Board combines expertise in deep learning, reinforcement learning, language generation, psychologically-infused natural language processing, and human-AI collaboration. The team is led by EE Ph.D. student Hao Fang, working with EE professor and lead advisor Mari Ostendorf, with contributions from EE Ph.D. student Hao Cheng, Allen School Ph.D. students Elizabeth Clark, Ari Holtzman, and Maarten Sap, and professors Yejin Choi and Noah Smith of the Allen School’s Natural Language Processing research group.

More than 100 teams from universities in 22 countries applied to be part of the inaugural competition. The finalists were selected from among 12 semifinalists whose socialbots were evaluated based on customer ratings of their interactions during hundreds of thousands of conversations between July 1 and August 15. In a blog post, Amazon’s Ashwin Ram, senior manager for Alexa AI, announced that UW’s Sounding Board team and the Alquist team from the Czech Technical University in Prague received the highest average customer ratings, earning them a place in the finals. What’s up Bot from Heriot-Watt University in Edinburgh, Scotland earned the wildcard slot.

The winner of the Alexa Prize will be announced at AWS re:Invent 2017 in November in Las Vegas, Nevada. Amazon will publish participating teams’ technical papers in the Alexa Prize Proceedings as a way of sharing their work with the broader research community.

Read more about the finalists in GeekWire and Xconomy, and contribute to the development of conversational AI by saying “Alexa, let’s chat” to interact with the finalists’ socialbots.

Go team!

September 1, 2017

Snap a selfie, screen for pancreatic cancer with new app from UW researchers

Alex Mariakakis demonstrates the BiliScreen app using a 3-D printed box

They say that the eyes are a window to the soul; with a new smartphone app developed by researchers at the University of Washington, they are now also a window to one’s health. Members of the Allen School’s UbiComp Lab, working with clinicians at UW Medicine, developed BiliScreen to enable anyone to snap a selfie and get screened for pancreatic cancer. The app detects adult jaundice — an early symptom of the disease — to enable more timely intervention and better outcomes for people at risk of developing one of the deadliest forms of cancer.

Jaundice is a yellowing of the skin and eyes produced when excess bilirubin is present in the bloodstream. The condition can be an indicator of a variety of diseases, including pancreatic cancer and hepatitis. In the case of the former, by the time a patient’s jaundice becomes visible, the condition often has advanced past the point of successful treatment.

“The eyes are an interesting gateway to the body,” said Shwetak Patel, director of the UbiComp Lab and a professor in the Allen School and UW Department of Electrical Engineering, in a UW News release. “Our question was: Could we capture some of these changes that might lead to earlier detection with a selfie?”

It turns out that they could by combining a smartphone’s camera, computer vision, and machine learning. Using a selfie of someone’s eyes, BiliScreen applies computer vision algorithms to the image to isolate the sclera, or white part of the eye, for analysis. It then calculates color information from the pixels of the sclera, discarding blood vessels and eyelashes in the process, and uses machine learning to correlate the color descriptor with bilirubin levels to determine whether that person has jaundice.
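The color-analysis step described above can be sketched in a few lines of plain Python. This is an illustration only, not the team's actual model: BiliScreen segments the sclera with computer vision and uses a trained machine learning model, while the "yellowness" measure and the regression coefficients below are made-up placeholders.

```python
# Illustrative sketch (not the team's actual pipeline): estimate a
# jaundice-related score from sclera pixels. We assume the sclera
# pixels have already been isolated; the linear fit is a placeholder
# for a trained regression model.

def yellowness(pixels):
    """Mean 'yellowness' of RGB pixels: yellow = strong red+green, weak blue."""
    scores = [((r + g) / 2 - b) / 255 for r, g, b in pixels]
    return sum(scores) / len(scores)

def estimate_bilirubin(pixels, slope=30.0, intercept=0.5):
    # slope/intercept are hypothetical stand-ins for learned parameters
    return max(0.0, slope * yellowness(pixels) + intercept)

healthy = [(200, 190, 185)] * 100    # near-white sclera
jaundiced = [(210, 190, 120)] * 100  # yellow-tinged sclera
assert estimate_bilirubin(jaundiced) > estimate_bilirubin(healthy)
```

The key design point survives the simplification: the app reduces a photo to a color descriptor of the sclera alone, so variation in skin tone, eyelashes, and blood vessels is excluded before any estimation happens.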

The research team measured BiliScreen’s effectiveness in a clinical study involving 70 people, with the help of a 3-D printed box — similar to a Google Cardboard headset — to control for variations in lighting conditions. BiliScreen correctly identified jaundice nearly 90 percent of the time compared to the standard, more invasive blood test. The app can also be used with a second accessory, paper glasses printed with colored squares to calibrate for differences in ambient lighting. The ultimate goal, however, is to remove the need for any accessories so that people can use the app anytime, anywhere, to instantly screen for a disease that has a five-year survival rate of just nine percent.

“By the time you’re symptomatic, it’s frequently too late,” noted Allen School Ph.D. student Alex Mariakakis, lead author of the research paper. “The hope is that if people can do this simple test once a month — in the privacy of their own homes — some might catch the disease early enough.”

Patel and Mariakakis developed BiliScreen working with UW Medicine professors James Taylor and Lei Yu, Allen School undergraduate Megan Banks, and research study coordinator Lauren Phillipi. The project builds on an earlier collaboration between the UbiComp Lab and UW Medicine that produced BiliCam, an app for screening newborn jaundice that was the subject of a recent article published in the journal Pediatrics.

BiliScreen will be presented at the UbiComp 2017 conference next month in Maui, Hawaii.

Read the UW News release here, the research paper here, and visit the BiliScreen project page here. Check out coverage of BiliScreen by IEEE Spectrum, GeekWire, PBS News Hour, The Independent, and Vice News.

Photo credit: Dennis Wise/University of Washington

August 28, 2017

UW’s Sham Kakade and Maryam Fazel earn NSF TRIPODS Award to advance the state of the art in data science

Sham Kakade and Maryam Fazel

A team of University of Washington researchers co-led by Sham Kakade, a professor in the Allen School and Department of Statistics, and Electrical Engineering Professor Maryam Fazel has secured a $1.5 million award from the National Science Foundation (NSF) to develop new algorithmic tools that will advance the state of the art in data science. The funding will support the researchers’ project titled “Algorithms for Data Science: Complexity, Scalability, and Robustness” as part of the agency’s Transdisciplinary Research in Principles of Data Science (TRIPODS) program.

TRIPODS was designed to engage members of the theoretical computer science, mathematics and statistics communities in developing the theoretical foundations of data science to promote data-driven discovery. The UW proposal aims to produce a common language and set of design principles to guide the development of new algorithmic tools that will automate the process of extracting robust insights from vast troves of data.

“Modern data science challenges transcend the ideas of any single discipline, which is what makes this work so exciting,” Kakade said. “With the growing availability of large datasets and increasing computational resources, we need more robust algorithmic tools to address contemporary data science challenges — and we believe a unifying approach is needed to overcome those challenges, accelerate the pace of scientific discovery and generate solutions to real-world problems.”

In addition to developing the language for data-driven discovery, the researchers intend to train students and postdoctoral scholars to be well-versed in the disciplines that underpin data science and incorporate theoretical ideas into a data science curriculum.

“Our project is unique in that it places mathematical optimization theory at the heart of this endeavor, bridging across computer science, mathematics, and statistics,” said Fazel, who holds adjunct appointments in the Allen School, Statistics, and the Department of Mathematics. “The project covers both high-impact interdisciplinary research and institutional activities such as workshops and boot camps to train students with novel techniques from all three disciplines. Ultimately, the project could serve as a springboard for building a full-fledged NSF institute in Phase II of the program.”

Left to right: Dmitriy Drusvyatskiy, Zaid Harchaoui, and Yin Tat Lee

Fazel’s research spans optimization, machine learning, signal processing, and system identification. Kakade, who is the Washington Research Foundation Data Science Chair at UW and an adjunct faculty member in Electrical Engineering, focuses on theoretical and applied questions in machine learning and artificial intelligence. They are joined on the project by three co-principal investigators: Mathematics Professor Dmitriy Drusvyatskiy, Statistics Professor Zaid Harchaoui, and Allen School Professor Yin Tat Lee.

The project is one of 12 proposals that received TRIPODS grants. The awards represent the NSF’s first major investment in “Harnessing the Data Revolution,” one of 10 “big ideas” the agency has identified as critical for future investment.

“These new TRIPODS projects will help build the theoretical foundations of data science that will enable continued data-driven discovery and breakthroughs across all fields of science and engineering,” said Jim Kurose, assistant director for Computer and Information Science and Engineering (CISE) at NSF, in a press release.

Read the NSF announcement here, and the UW team’s abstract here.

August 24, 2017

Allen School’s open-source TVM framework bridges the gap between deep learning and hardware innovation

Illustration of the gap between deep learning frameworks and different types of hardware

Deep learning has become increasingly indispensable for a broad range of applications, including machine translation, speech and facial recognition, drug discovery, and social media filtering. This growing reliance on deep learning has been fueled by a combination of increased computational power, decreased data storage costs, and the emergence of scalable deep learning systems like TensorFlow, MXNet, Caffe and PyTorch that enable companies and organizations to analyze and extract value from vast amounts of data with the help of neural networks.

But existing systems have limitations that hinder their deployment across a range of devices. Because they are built to be optimized for a narrow range of hardware platforms, such as server-class GPUs, it takes considerable engineering effort and expense to adapt them for other platforms — not to mention provide ongoing support. The Allen School’s novel TVM framework aims to bridge that gap between deep learning systems, which are optimized for productivity, and the multitude of programming, performance and efficiency constraints enforced by different types of hardware.

With TVM, researchers and practitioners in industry and academia will be able to quickly and easily deploy deep learning applications on a wide range of systems, including mobile phones, embedded devices, and low-power specialized chips — and do so without sacrificing battery power or speed.

“TVM acts as a common layer between the neural network and hardware back end, eliminating the need to build a separate infrastructure optimized for each class of device or server,” explained project lead Tianqi Chen, an Allen School Ph.D. student who focuses on machine learning and systems. “Our framework allows developers to quickly and easily deploy and optimize deep learning systems on a multitude of hardware devices.”

Portraits of researchers who developed the TVM framework

TVM was developed by a team of researchers with expertise in machine learning, systems and computer architecture. In addition to Chen, the team includes Allen School Ph.D. students Thierry Moreau and Haichen Shen; professors Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy; and Ziheng Jiang, an undergraduate student at Fudan University and intern at AWS.

“With TVM, we can quickly build a comprehensive deep learning software framework on top of novel hardware architectures,” said Moreau, whose research focuses on computer architecture. “TVM will help catalyze hardware-software co-design in the field of deep learning research.”

“Researchers always try out new algorithms in deep learning, but high-performance libraries usually fall behind,” added Shen, a Ph.D. student in systems. “TVM now helps researchers quickly optimize their implementation for new algorithms, and thus accelerates the adoption of new ideas.”

TVM is the base layer to a complete deep learning intermediate representation (IR) stack: it provides a reusable toolchain for compiling high-level neural network algorithms down to low-level machine code that is tailored to a specific hardware platform. The team drew upon the wisdom of the compiler community in building the framework, constructing a two-level intermediate layer consisting of NNVM, which is a high-level IR for task scheduling and memory management, and TVM, an expressive low-level IR for optimizing compute kernels. TVM is shipped with a set of reusable optimization libraries that can be tuned at will to fit the needs of various hardware platforms, from wearables to high-end cloud compute servers.
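Loop tiling is one concrete example of the kind of kernel transformation the low-level IR expresses. The plain-Python sketch below is illustrative only — TVM generates such variants automatically from a declarative schedule and compiles them to each hardware back end — but it shows how the same computation can be restructured for locality without changing its result.

```python
# The same matrix multiply, written naively and with loop tiling —
# the kind of schedule transformation TVM's low-level IR expresses.
# (Hand-written illustration; TVM derives these variants automatically
# per hardware target.)

def matmul_naive(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, t=4):
    # Tiling visits the index space in t-by-t blocks, improving cache
    # locality on real hardware; the arithmetic is unchanged.
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, t):
        for j0 in range(0, n, t):
            for k0 in range(0, n, t):
                for i in range(i0, min(i0 + t, n)):
                    for j in range(j0, min(j0 + t, n)):
                        for k in range(k0, min(k0 + t, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

The tile size `t` is exactly the sort of knob that differs across phones, GPUs, and accelerators — which is why a reusable, tunable optimization layer saves so much per-platform engineering.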

“Efficient deep learning needs specialized hardware,” Ceze noted. “Being able to quickly prototype systems using FPGAs and new experimental ASICs is of extreme value.”

“With today’s release, we invite the academic and industry research communities to join us in advancing the state of the art in machine learning and hardware innovation,” said Guestrin.

In preparation for the public release, the team sought early contributions from Amazon, Qihoo 360, Facebook, UC Davis, HKUST, TuSimple, SJTU, and members of the DMLC open-source community.

“We have already had a terrific response from Amazon, Facebook, and several other early collaborators,” said Krishnamurthy. “We look forward to unleashing developers’ creativity and building a robust community around TVM.”

To learn more, read the technical blog and visit the TVM github page.

August 17, 2017

First-Choice Majors of UW Confirmed Incoming Freshmen

The trend continues – at the University of Washington, and across the nation. The two charts shown here tell the story: they show the ten top first-choice majors of UW confirmed incoming freshmen from fall 2010 to fall 2017, and the first-choice College of Engineering majors of UW confirmed incoming freshmen over the same period.

August 17, 2017

Allen School researchers reveal how smart devices can be turned into surveillance devices with music

CovertBand demo

Researchers from the Allen School’s Networks & Mobile Systems Lab and Security and Privacy Research Lab teamed up on a new project, CovertBand, to demonstrate how smart devices can be converted into surveillance tools capable of secretly tracking the body movements and activities of users and their companions. CovertBand turns off-the-shelf devices into active sonar systems with the help of acoustic pulses concealed in music. The team’s findings reveal how increasingly popular smart home assistants and other connected devices could be used to compromise users’ privacy in their own homes — even from half a world away.

“Most of today’s smart devices including smart TVs, Google Home, Amazon Echo and smartphones come with built-in microphones and speaker systems — which lets us use them to play music, record video and audio tracks, have phone conversations or participate in videoconferencing,” Allen School Ph.D. student and co-lead author Rajalakshmi Nandakumar told UW News. “But that also means that these devices have the basic components in place to make them vulnerable to attack.”

As fellow author and Ph.D. student Alex Takakuwa points out, “Other surveillance approaches require specialized hardware. CovertBand shows for the first time that through-barrier surveillance is possible using no hardware beyond what smart devices already have.”

CovertBand relies on repetitive acoustic pulses in the range of 18 to 20 kHz. While those frequencies are typically too high for most adults to hear, young people and pets might pick up on the signals — and an audible volume is required for more distant surveillance or to pick up activity through walls. To get around this, the team found that they could disguise the pulses under a layer of music, with repetitive, percussive beats the most effective at hiding the additional sound.
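The masking idea can be sketched in a few lines: generate a near-ultrasonic tone and add it, at low amplitude, beneath an audible signal. This is a simplification — the real attack emits structured sonar pulses and analyzes their echoes, and the pulse shape, amplitudes, and 440 Hz "music" stand-in below are assumptions for illustration.

```python
import math

# Illustrative only: ride an 18 kHz pulse (within CovertBand's
# 18-20 kHz range) quietly beneath an audible signal, at a standard
# 44.1 kHz sample rate. The real system uses structured pulses and
# processes the echoes picked up by the device's microphone.

RATE = 44100

def tone(freq_hz, n_samples, amp):
    return [amp * math.sin(2 * math.pi * freq_hz * i / RATE)
            for i in range(n_samples)]

def mix(music, pulse):
    # Most adults will hear only the music; the pulse is near-ultrasonic.
    return [m + p for m, p in zip(music, pulse)]

music = tone(440.0, 1024, amp=0.8)     # audible A4 as a "music" stand-in
pulse = tone(18000.0, 1024, amp=0.1)   # covert near-ultrasonic pulse
covert = mix(music, pulse)
```

Note the sample-rate constraint: at 44.1 kHz, frequencies up to about 22 kHz can be reproduced, which is why ordinary consumer speakers suffice for an 18-20 kHz attack.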

Tadayoshi Kohno, Rajalakshmi Nandakumar, Shyam Gollakota

Left to right: Tadayoshi Kohno, Rajalakshmi Nandakumar, and Shyam Gollakota (Not pictured: Alex Takakuwa)

“To our knowledge, this is the first time anyone has demonstrated that it is possible to convert smart commodity devices into active sonar systems using music,” said Allen School professor and co-author Shyam Gollakota.

By connecting a smartphone to a portable speaker or flat-screen TV, the researchers discovered they could use the data collected through CovertBand to accurately identify repetitive movements such as walking, jumping, and exercising up to a distance of six meters within line of sight, and up to three meters through walls. Having proven the concept, researchers believe a combination of more data and the use of machine learning tools would enable rapid classification of a greater variety of movements — and perhaps enable the identification of the individual making them.

With CovertBand, Allen School researchers have identified a plausible threat, given the increasing ubiquity of these devices in our pockets and in our living rooms. But our embrace of emerging technologies needn’t end on a sour note. As professor and co-author Tadayoshi Kohno points out, when it comes to cybersecurity, knowledge is power.

“We’re providing education about what is possible and what capabilities the general public might not know about, so that people can be aware and can build defenses against this,” he said.

The researchers will present a paper detailing their findings at the Ubicomp 2017 conference in Maui, Hawaii next month.

Read the full UW News release here. Learn more and listen to samples of the CovertBand attack music on the project web page here. Check out articles on CovertBand in Fast Company, Digital Trends, New Atlas, and The Register.

August 16, 2017

Allen School professor Franziska Roesner recognized with TR35 Award

Franziska Roesner

Allen School professor Franziska Roesner has been recognized with a 2017 TR35 Award, MIT Technology Review’s annual celebration of the world’s 35 top innovators under the age of 35. Roesner is honored in the “Inventors” category, recognizing the visionary individuals who are creating the breakthroughs and building the technologies that will shape the future.

Roesner co-directs the Allen School’s Security and Privacy Research Lab, where she analyzes the security and privacy risks of existing and emerging technologies and develops tools to safeguard end users. She is also a member of the University of Washington’s interdisciplinary Tech Policy Lab.

She is the first computer scientist to analyze the risks associated with augmented reality (AR) technologies in order to support the design of systems that mitigate vulnerabilities in these emerging platforms. These technologies are becoming increasingly popular, not only for entertainment but also for assistive purposes, such as heads-up windshield displays in cars. When Roesner began studying them in 2011, products such as Google Glass had not been announced yet and such technologies were still largely in the realm of science fiction. Roesner’s research covers issues associated with both inputs and outputs, from the potentially sensitive sensor data these platforms collect on users in the course of their interactions, to the impact of visual ad content on the safety of users and bystanders. Her impact in AR and virtual reality (VR) extends beyond the lab: her research has made her a go-to source for other researchers, government regulators, and industry leaders on how to counter the privacy, security, and safety risks in order to realize the full potential of these emerging technologies.

Web privacy and security is another area in which Roesner has produced pioneering research that has had a lasting impact on users. In 2011, when web tracking was a nascent concern, she produced the first comprehensive measurement of third-party tracking on the web. More recently, her team studied the evolution of tracking methods over a 20-year period, from 1996 to 2016, using a novel tool called Tracking Excavator. Roesner previously built a new anti-tracking tool, ShareMeNot, whose code was incorporated into the Electronic Frontier Foundation’s Privacy Badger browser add-on. Privacy Badger and other add-ons that incorporated ShareMeNot’s ideas are used by millions of people to safeguard their privacy online.
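At its core, a third-party tracking measurement classifies every request a page triggers by whether it goes to a different site than the one the user visited. A minimal sketch of that classification, with an important caveat: real measurements like Roesner's use a proper public-suffix list to determine site boundaries, whereas the last-two-labels heuristic here is an assumption that misclassifies domains such as example.co.uk.

```python
from urllib.parse import urlsplit

# Simplified third-party classification: a request is "third-party"
# when its site differs from the visited page's site. The site()
# heuristic below (last two host labels) is a stand-in for a proper
# public-suffix-list lookup.

def site(url: str) -> str:
    host = urlsplit(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def third_party_requests(page_url, request_urls):
    page_site = site(page_url)
    return [u for u in request_urls if site(u) != page_site]
```

A first-party subresource (say, a CDN subdomain of the same site) is not flagged, while a request to an unrelated ad network is — which is the distinction tools like ShareMeNot act on.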

Another user group that has benefitted from Roesner’s user-centric research is journalists and others who rely on secure communication with sources, clients, and colleagues. After hearing stories like how it took reporter Glenn Greenwald months to establish a secure email connection with source Edward Snowden, she collaborated with experts from the journalism community on a study of the computer security needs of journalists and lawyers. Based on those findings, Roesner spearheaded the development of Confidante, a usable encrypted email client that offers the security of traditional encryption technologies without the friction of traditional key management and verification.

“Ideally, we’d like to design and build security and privacy tools that actually work for end users. But to do that, we need to engage with those users, to understand what they need, and not build technology in isolation,” Roesner told UW News.

“As our technologies progress and become even more integral to our lives, the push to consider privacy and security issues will only increase,” she said.

Before joining the UW faculty in 2014, Roesner earned her Ph.D. and Master’s degree from the Allen School working with professor Tadayoshi Kohno, and bachelor’s degrees in computer science and liberal arts from the University of Texas at Austin.

Since 1999, MIT Technology Review has published its annual list of “Innovators Under 35” recognizing exceptional early-career scientists and technologists whose research has the potential to change the world. Past TR35 honorees include Allen School faculty members Shyam Gollakota and Kurtis Heimerl (2014), Jeffrey Heer and Shwetak Patel (2009), and Tadayoshi Kohno (2007), and alumni Kuang Cheng (2014), Noah Snavely (2011), Scott Saponas (2010), Jeffrey Bigham and Adrien Treuille (2009), and Karen Liu and Tapan Parikh (2007).

View Roesner’s TR35 profile here and the full list of 2017 TR35 recipients here.

Congratulations, Franzi!

August 16, 2017

Allen School faculty and alumni gather at annual DARPA ISAT meeting

Every August, the members of the Defense Advanced Research Projects Agency’s Information Science and Technology study group (DARPA ISAT) gather in Woods Hole, Massachusetts for their annual summer meeting to discuss the future of computing and communications technologies.

As professor and past ISAT Chair Ed Lazowska notes, the Allen School tends to be “wonderfully over-represented” at these meetings and on the ISAT study group in general. No fewer than 11 current members have Allen School connections, including three newly appointed to the group this year: professor Jeffrey Heer, director of the Allen School’s Interactive Data Lab, and alumni Ed Felten (Ph.D., ’93), a member of the faculty at Princeton University, and Emin Gun Sirer (Ph.D., ’02), a faculty member at Cornell University.

Since 1987, the U.S. Department of Defense has relied on the 30 scientists and engineers who serve on the DARPA ISAT study group to provide ongoing, independent assessment of the state of advanced information science and technology and to recommend new directions in research.

Allen School faculty and alumni at the 2017 DARPA ISAT study group meeting

The Allen School/DARPA ISAT family. Front row, left to right: bachelor’s alumnus Hakim Weatherspoon (professor at Cornell); professors Ras Bodik, Luis Ceze, and Jeffrey Heer; and Ph.D. alumnus Brandon Lucia (professor at Carnegie Mellon). Back row, left to right: Ph.D. alumna Roxana Geambasu (professor at Columbia); professor Ed Lazowska; adjunct professor Tom Daniel (UW Biology); and Ph.D. alumnus Ed Felten (professor at Princeton). (Not present: professor and Ph.D. alumna Franziska Roesner and Ph.D. alumnus Emin Gun Sirer (professor at Cornell).)

August 15, 2017

Allen School researchers expose cybersecurity risks of DNA sequencing software

Lee Organick, Karl Koscher, and Peter Ney prepare the DNA exploit

Left to right: Lee Organick, Karl Koscher, and Peter Ney prepare the DNA exploit.

In an illustration of just how narrow the divide between the biological and digital worlds has become, a team of researchers from the Allen School released a study revealing potential security risks in software commonly used for DNA sequencing and analysis — and demonstrated for the first time that it is possible to infect software systems with malware delivered via DNA molecules. The team will present its paper, “Computer Security, Privacy, and DNA Sequencing: Compromising Computers with Synthesized DNA, Privacy Leaks, and More,” at the USENIX Security Symposium in Vancouver, British Columbia next week.

Many open-source systems used in DNA analysis began in the cloistered domain of the research lab. As the cost of DNA sequencing has plummeted, new medical and consumer-oriented services have taken advantage, leading to more widespread use — and with it, potential for abuse. While there is no evidence to indicate that DNA sequencing software is at imminent risk, the researchers say now would be a good time to address potential vulnerabilities.

“One of the big things we try to do in the computer security community is to avoid a situation where we say, ‘Oh shoot, adversaries are here and knocking on our door and we’re not prepared,’” said professor Tadayoshi Kohno, co-director of the Security and Privacy Research Lab, in a UW News release.

Tabloid headline: "Computer Virus Spreads to Humans!"

Researcher Tadayoshi Kohno wondered if what this tabloid headline suggested would work in reverse: Could DNA be used to deliver a virus to a computer?

Kohno and Karl Koscher (Ph.D., ’14), who works with Kohno in the Security and Privacy Research Lab, have been down this road before — literally as well as figuratively. In 2010, they and a group of fellow UW and University of California, San Diego security researchers demonstrated that it was possible to hack into modern automobile systems connected to the internet. They have also explored potential security vulnerabilities in implantable medical devices and household robots.

Kohno conceived of this latest experiment after he came across an online discussion about a tabloid headline in which a person was alleged to have been infected by a computer virus. While he wasn’t about to take that fantastical storyline at face value, Kohno was curious whether the concept might work in reverse.

Kohno, Koscher, and Allen School Ph.D. student Peter Ney — representing the cybersecurity side of the equation — teamed up with professor Luis Ceze and research scientist Lee Organick of the Molecular Information Systems Lab, where they are working on an unrelated project to create a DNA-based storage solution for digital data. The group decided not only would they analyze existing software for vulnerabilities; they would attempt to exploit them.

“We wondered whether under semi-realistic circumstances it would be possible to use biological molecules to infect a computer through normal DNA processing,” Ney said.

As it turns out, it is possible. The team introduced a known vulnerability into software they would then use to analyze the DNA sequence. They encoded a malicious exploit within strands of synthetic DNA, and then processed those strands using the compromised software. When they did, the researchers were able to execute the encoded malware to gain control of the computer on which the sample was being analyzed.
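A common way to represent digital data in DNA is to map two bits to each of the four bases, which the sketch below illustrates. This is an assumption for illustration only: the team's actual encoding had to satisfy additional biochemical constraints, and the exploit payload itself was of course more involved than a round-tripped string.

```python
# Illustrative two-bits-per-base encoding of bytes into a DNA sequence
# and back. (Not the team's actual scheme, which must also respect
# synthesis and sequencing constraints such as avoiding long repeats.)

BASE_FOR = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR = {b: v for v, b in BASE_FOR.items()}

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # big-endian 2-bit chunks
            bases.append(BASE_FOR[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR[base]
        out.append(byte)
    return bytes(out)

assert dna_to_bytes(bytes_to_dna(b"exploit")) == b"exploit"
```

Four bases per byte means even a short malicious payload fits comfortably in a synthesizable strand, which is what makes the attack surface plausible once sequencing software mishandles the recovered bytes.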

While there are a number of physical and technical challenges someone would have to overcome to replicate the experiment in the wild, it nevertheless should serve as a wake-up call for an industry that has not yet had to contend with significant cybersecurity threats. According to Koscher, there are steps companies and labs can immediately take to improve the security of their DNA sequencing software and practice good “security hygiene.”

Onscreen output of a DNA sequencing machine

This output from a DNA sequencing machine includes the team’s exploit.

“There is some really low-hanging fruit out there that people could address just by running standard software analysis tools that will point out security problems and recommend fixes,” he suggested. For the longer term, the group’s recommendations include employing adversarial thinking in setting up new processes, verifying the source of DNA samples prior to processing, and developing the means to detect malicious code in DNA.

The team emphasized that people who use DNA sequencing services should not worry about the security of their personal genetic and medical information — at least, not yet. “Even if someone wanted to do this maliciously, it might not work,” Organick told UW News.

While Ceze admits he is concerned by what the team discovered during their analysis, it is a concern that is largely rooted in conjecture at this point.

“We don’t want to alarm people,” Ceze pointed out. “We do want to give people a heads up that as these molecular and electronic worlds get closer together, there are potential interactions that we haven’t really had to contemplate before.”

Visit the project website and read the UW News release to learn more.

Also see coverage in Wired, The Wall Street Journal, MIT Technology Review, The Atlantic, TechCrunch, Mashable, Gizmodo, ZDNet, GeekWire, Inverse, IEEE Spectrum, and TechRepublic.

August 10, 2017

Allen School’s Zachary Tatlock and Neutrons software verification project featured in Communications of the ACM

Zach Tatlock

Zachary Tatlock

Verifying that software runs safely and reliably is a mandate for mission-critical systems ranging from avionics to automobiles. A system of special significance to professor Zachary Tatlock and colleagues in the Allen School’s Programming Languages and Software Engineering (PLSE) group is the Clinical Neutron Therapy System (CNTS) at the UW Medical Center. One of only three neutron therapy systems of its kind in the United States, the CNTS directs powerful radiation at patients’ heads to treat cancers of the tongue and esophagus. As Tatlock notes, “even a small mistake can be potentially deadly.”

In an article featured in the latest issue of Communications of the ACM, Tatlock discusses how he and a team of UW researchers are working to prevent such mistakes through the Neutrons project. The Neutrons project builds on Tatlock’s long-standing interest in devising and improving techniques to ensure that programs remain error free. Verifying systems like the CNTS presents its own set of challenges: the diversity and complexity of the radiotherapy system’s dozen components, each with its own level of criticality, meant that no single tool was suitable for checking critical component properties while also ensuring that their composition implies critical system properties.

Working with Dr. Jonathan Jacky, a radiation oncologist at UW, Tatlock and a team of Allen School researchers that includes professors Emina Torlak, Xi Wang, and Michael Ernst, and graduate students Stuart Pernsteiner and Calvin Loncaric applied modern software verification techniques to construct the first mechanically-checked safety case for a real safety-critical system in clinical use. The safety case includes a detailed formal model of the CNTS and a set of tools for establishing component properties specified by the model. Leveraging existing formal tools, the researchers built the entire case by writing just 2,700 lines of code.
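The flavor of such a mechanically-checked safety property can be conveyed with a toy example, vastly simpler than the real CNTS safety case: the controller may enable the beam only when every machine setting matches the prescription, and we check this exhaustively over a small state space. The setting names and controller logic below are hypothetical; the team's actual work used formal tools over a detailed model of the system.

```python
from itertools import product

# Toy safety check in the spirit of (but far simpler than) the
# Neutrons safety case: exhaustively verify that the beam is never
# permitted when any machine setting deviates from the prescription.
# Setting names and logic are hypothetical illustrations.

SETTINGS = ["energy", "dose", "collimator"]

def beam_permitted(prescribed, actual):
    # Controller logic under test: require an exact match on all settings.
    return all(prescribed[s] == actual[s] for s in SETTINGS)

def check_safety(values=(0, 1)):
    # Enumerate every (prescribed, actual) pair and check the property.
    for pres in product(values, repeat=len(SETTINGS)):
        for act in product(values, repeat=len(SETTINGS)):
            prescribed = dict(zip(SETTINGS, pres))
            actual = dict(zip(SETTINGS, act))
            if pres != act:
                assert not beam_permitted(prescribed, actual)
            else:
                assert beam_permitted(prescribed, actual)
    return True
```

Exhaustive enumeration works only for tiny state spaces; the point of the formal tools the team used is to establish such properties over realistic components without brute force.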

CNTS control panel

CNTS control panel

The CNTS had been operating for over 30 years without incident. However, in constructing the case, Tatlock and the team revealed serious flaws that could potentially result in the beam operating outside of prescribed settings. They also discovered flaws in the implementation of the system’s EPICS language, which they reported to the CNTS staff.

Their work demonstrated that formal, checkable safety cases can provide significant practical benefits by focusing analysis effort on deep properties of system components that matter for the safety of the system as a whole.

“What we want to be able to do is ensure the reliability of all the pieces,” Tatlock explained in the article. “We want to make sure there are no bugs that can affect the parts that are critical.”

The Neutrons project is just one of several undertaken by PLSE researchers that use verification techniques described in the article, including Oeuf, for verified compilation of programs written in Coq; Verdi, for verifying distributed systems; and Bagpipe, for verifying internet router configurations.

Read the August 2017 ACM article here. Learn more about the Neutrons project in the team’s CAV 2016 paper here, and watch a short video about the project here.

August 8, 2017
