
UW’s Jennifer Mankoff, Batya Friedman and Jacob Wobbrock elected to CHI Academy

Allen School professor Jennifer Mankoff

Three University of Washington faculty who are recognized leaders in human-computer interaction (HCI) research — Allen School professor Jennifer Mankoff and Information School professors (and Allen School adjunct professors) Batya Friedman and Jacob Wobbrock — have been honored by the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (SIGCHI) with election to the CHI Academy. The CHI Academy is composed of individuals who have made substantial, cumulative contributions to the field of HCI through the development of new research directions and innovations and have influenced the work of their peers. Mankoff, Friedman and Wobbrock are three of only eight new CHI Academy members elected this year!

Jennifer Mankoff, who holds the Richard E. Ladner Endowed Professorship at the Allen School, is a leading researcher in human-computer interaction who has devoted her career to promoting a digital future defined by inclusion and accessibility for all. As director of the Make4All Group and a member of the interdisciplinary DUB (Design, Use, Build) group, one of the ways in which she has sought to advance that goal is by revolutionizing and democratizing the production of assistive technologies using 3D printing and other advanced fabrication techniques.

In one of her recent projects, Mankoff worked with Allen School colleague Shyam Gollakota and members of the Networks & Mobile Systems Lab to develop the first 3D-printed objects capable of tracking and storing data about their use, with potential applications ranging from smart prescription pill bottles to customized prosthetic devices. She also led the development of a paradigm-shifting technology for screen readers — Spatial Region Interaction Techniques, or SPRITEs — in collaboration with colleagues at Carnegie Mellon University, where she was a faculty member before joining the Allen School. SPRITEs leverages a standard keyboard to make interactive web content more accessible for people who are blind or low-vision. In addition to her focus on accessibility, Mankoff has been a pioneer in applying computation to address societal challenges around sustainability, such as leveraging Internet-scale technologies to reduce energy consumption.

Mankoff’s work previously has been recognized with an Alfred P. Sloan Fellowship, an IBM Faculty Fellowship, a GVU Impact Award from her alma mater, Georgia Tech, and Best Paper awards at the ASSETS, CHI, and Mobile HCI conferences.

“It’s an honor to be included in the CHI Academy, and one I hope to live up to in my future research as much as in my past research,” Mankoff said. “I am passionate about creating accessible, inclusive systems and the engineering to make them feasible and deployable, and grateful for all the students and collaborators who have helped me to create them and to be recognized for this work.”

iSchool professor Batya Friedman

Joining Mankoff in the 2019 class of CHI Academy inductees are fellow DUB members Batya Friedman and Jacob Wobbrock, professors in the iSchool and adjunct professors in the Allen School.

Friedman is a pioneer of value sensitive design, an approach to developing technology that accounts for human values. Her work has influenced multiple fields beyond HCI, including computer security, architecture, civil engineering, law, transportation, and many others.

Wobbrock’s research aims to develop a scientific understanding of how people interact with technology and information. His work seeks to improve the quality of those interactions, particularly for people with disabilities, using human performance measurement and modeling, input and interaction techniques, accessible computing, and more.

“Jen, Batya, and Jake have helped build UW’s reputation as a center of excellence in HCI research and innovation,” said Hank Levy, director of the Allen School. “All three have made lasting contributions not just in HCI and computing, but also in many other fields in their quest to use technology to solve some of society’s greatest challenges. Their induction into the CHI Academy is a testament to their technical leadership and enduring impact by putting people first.”

iSchool professor Jacob Wobbrock

Current CHI Academy member and iSchool dean Anind Dey concurred. “Combining this with three members of the 2019 CHI Academy class being from the UW really solidifies the UW as a global leader in HCI,” he said in a related announcement. “The level of impact all three have had for such a sustained period of time is admirable, and makes them very deserving of recognition by the CHI Academy.”

The new inductees will be formally recognized at the CHI 2019 conference to be held in Glasgow, Scotland in May. Learn more about the 2019 SIGCHI Awards here, and read the related iSchool article here.

Congratulations to Jen, Batya, and Jake!

February 13, 2019

Allen School celebrates opening of NVIDIA’s new robotics research lab in Seattle

Allen School director Hank Levy welcomes NVIDIA to Seattle. Credit: NVIDIA

In yet another sign of the Puget Sound region’s emergence as a center of advanced robotics and artificial intelligence research, NVIDIA last week marked the official opening of its new AI Robotics Research Lab just blocks away from the Allen School and University of Washington’s Seattle campus. Led by Allen School professor Dieter Fox, NVIDIA’s new lab in the UW CoMotion building will bring together multidisciplinary teams to focus on the development of next-generation robots that can work safely and effectively alongside humans.

Many UW faculty and students were on hand to toast the new lab, including Allen School director Hank Levy. “I’d just like to say how excited we are to have NVIDIA here in Seattle,” Levy said in remarks welcoming the assembled guests to the lab for the first time. “At this moment, AI is changing the world, and NVIDIA’s hardware is a driving force in that movement.”

NVIDIA CEO Jensen Huang and a guest get hands-on with a touch-sensitive robot on display in the Seattle lab.

The new lab represents a couple of firsts for NVIDIA — not only is it the company’s first research outpost in Seattle, but it is also the very first NVIDIA research lab focused on robotics. Among the highlights of the 13,000 square-foot space is a working kitchen, complete with drawers, cabinets and appliances, in which the team hopes to whip up new capabilities in human-robot collaboration. “We want to ultimately get a robot that can cook a meal with you,” Fox said, “or that you can just talk to it and tell the robot what you want to do.”

To get there, Fox and his colleagues will need to make progress in a variety of areas, spanning artificial intelligence, robotic manipulation, machine learning, computer vision, natural language processing, and more. Noting that the new lab intends to publish its research so that it can be built upon by others — “we aren’t keeping it to ourselves” — Fox emphasized that its work will be a collaborative endeavor through which researchers from NVIDIA, UW, and other leading universities will push the field of robotics forward.

Allen School professors Ed Lazowska (left) and Dieter Fox. Under Fox’s leadership, the new lab intends to collaborate with researchers at UW and other leading universities and publish the results. Credit: NVIDIA

That sentiment was echoed by NVIDIA CEO Jensen Huang, who emphasized that the culture of collaboration that underpins UW research was also “the perfect culture for creating a robotics platform” in a city that he regards as “one of the greatest hubs of computer science in the world,” thanks to the presence of UW, Microsoft, Amazon, and many others.

“Robotics is going to change the world,” Levy said, “and having an NVIDIA lab this close to UW, started by Allen School professor Dieter Fox, gives us an incredible opportunity to work together to advance the state of the art. We are looking forward to long-term and successful collaborations between this lab, our faculty and students, and other members of the Seattle tech community.”

To learn more, read NVIDIA’s blog post and check out coverage by The Daily, GeekWire, MIT Tech Review, IEEE Spectrum, Robotics & Automation News, The Robot Report, Engadget, Hot Hardware, eTeknix, Neowin, SD Times, and the Puget Sound Business Journal (subscription required).

January 16, 2019

Mobile app developed by UW researchers offers people a “Second Chance” in the event of an opioid overdose

A person interacts with the Second Chance mobile app, activating the "monitor" function before using opioids
Credit: Mark Stone/University of Washington

Someone in the United States dies from an opioid overdose every 12 and a half minutes, according to data from the National Institute on Drug Abuse, and the rise in fatalities stemming from illicit opioid use is widely recognized as a public health epidemic. Many of these deaths could be prevented by rapid detection and intervention, including the administration of naloxone to reverse the effects of an overdose. Now, thanks to researchers in the Allen School’s Networks & Mobile Systems Lab and UW Medicine’s Department of Anesthesiology & Pain Medicine, a solution for preventing opioid-related deaths may be at hand. In a paper published today in Science Translational Medicine, the team describes a new contactless smartphone app capable of detecting signs of opioid overdose. Called Second Chance, the app converts the phone’s speaker and microphone into an active sonar system to unobtrusively monitor a person’s breathing and movements from distances of up to three feet, looking for patterns that indicate that they may be in danger.

According to one of those researchers, Allen School professor Shyam Gollakota, the ultimate goal in developing the app is not only to monitor a person’s condition, but eventually be able to connect users immediately with potentially life-saving treatment. “The idea is that people can use the app during opioid use so that if they overdose, the phone can potentially connect them to a friend or emergency services to provide naloxone,” Gollakota explained in a UW News release.

But first, the team had to develop a reliable algorithm that would work in real-world settings. Gollakota and his collaborators — Allen School Ph.D. student Rajalakshmi Nandakumar and Dr. Jacob Sunshine, a physician scientist at UW Medicine — looked northward to Insite, the first legal supervised injection site in North America. Located in Vancouver, British Columbia, Insite hosts approximately 500 supervised injections per day. There, Nandakumar and her colleagues were able to gather data on individuals’ breathing patterns before and after opioid injection in a safe setting — gaining valuable insights into how Second Chance might function in actual situations where opioids are being used.

“The participants prepared their drugs like they normally would, but then we monitored them for a minute pre-injection so the algorithm could get a baseline value for their breathing rate,” Nandakumar explained. “After we got a baseline, we continued monitoring during the injection and then for five minutes afterward, because that’s the window when overdose symptoms occur.”

Those symptoms might include cessation of breathing for 10 seconds or more — known as post-injection central apnea — and opioid-induced respiratory depression, in which a person’s respiratory rate slows significantly to seven or fewer breaths per minute. In addition to measuring a person’s breathing, the app is capable of detecting changes in posture, such as a slumping of the head, which could indicate that the person is in danger. In cases where someone is alone with no one to witness such symptoms, an app like Second Chance could be their only means of getting help. For this reason, the researchers couldn’t leave the effectiveness of their algorithm to chance; they had to be confident that the app would do what it was designed to do in the wild. To that end, they looked for a way to test the app on symptoms consistent with an overdose without putting anyone at risk.
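Those two warning signs amount to simple thresholds over a stream of detected breath times. The following is a hypothetical sketch of that logic, assuming the sonar pipeline has already produced per-breath timestamps; the function and names are illustrative, not the app's actual implementation.

```python
# Hypothetical sketch of the overdose warning signs described above,
# assuming breath timestamps (in seconds) extracted by the sonar pipeline.
APNEA_SECONDS = 10    # post-injection central apnea: no breathing for >= 10 s
DEPRESSION_BPM = 7    # respiratory depression: <= 7 breaths per minute

def classify_breathing(breath_times, window_seconds=60):
    """Classify one monitoring window of breath timestamps."""
    if len(breath_times) >= 2:
        # Longest pause between consecutive breaths in the window.
        gaps = [b - a for a, b in zip(breath_times, breath_times[1:])]
        if max(gaps) >= APNEA_SECONDS:
            return "apnea"
    rate_bpm = len(breath_times) * 60 / window_seconds
    if rate_bpm <= DEPRESSION_BPM:
        return "respiratory depression"
    return "normal"

print(classify_breathing([0, 9, 18, 27, 36, 45, 54]))  # 7 breaths in 60 s
```

A real monitor would of course run continuously and handle noisy or missing breath detections; the point here is only how the two clinical thresholds translate into code.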

Overdose events at facilities like Insite are rare by design. But Sunshine, an anesthesiologist, knew of another type of facility where many of the same symptoms can be safely simulated: the operating room. “When patients undergo anesthesia, they experience much of the same physiology that people experience when they’re having an overdose,” he explained. “Nothing happens when people experience this event in the operating room because they’re receiving oxygen and they are under the care of an anesthesiology team.”

The smartphone interface shows the Second Chance app dialing 9-1-1 for emergency intervention after detecting signs of an overdose
Credit: Mark Stone/University of Washington

The team worked with Sunshine’s colleagues in UW Medicine to test the algorithm in what amounted to a real-world simulation of an overdose, with the help of healthy patients undergoing elective surgery who offered their informed consent. During their regularly scheduled procedures, the participants were administered standard anesthetic medications that prompted slowed or stopped breathing for 30 seconds while being monitored by Second Chance. In 19 out of 20 cases, the app correctly detected the symptoms that correlate with an overdose; in the one case it missed, the participant’s breathing rate was just above the threshold of what would be considered a sign of overdose.

Having validated their approach, the researchers aim to commercialize Second Chance through Sound Life Sciences, Inc., a digital therapeutics company spun out of the University of Washington, and seek approval from the U.S. Food and Drug Administration (FDA). While the app is technically capable of measuring symptoms consistent with an overdose of any form of opioid, including prescription opioids taken by mouth, the researchers are quick to point out that so far, they have only tested it in scenarios involving use of illicit injectable opioids — the most common source of death by overdose. As Sunshine points out, it’s a human toll that is completely preventable with timely intervention, which as the name of the app suggests, would offer people the proverbial second chance.

“The goal of this project is to try to connect people who are often experiencing overdoses alone to known therapies that can save their lives,” he said. “We hope that by keeping people safer, they can eventually access long-term treatment.”

Learn more about Second Chance in the Science Translational Medicine paper here, the UW News release here, and a related UW Medicine story here. Check out coverage by Scientific American, MIT Technology Review, Science News, CNBC, Mother Jones, U.S. News & World Report, Axios, Futurism, CNET, Fast Company, Engadget, New Atlas, The Verge, Smithsonian, KOMO News, UPI, Tech Times, TechSpot, the Associated Press, and MD Magazine. Listen to Nandakumar discussing the Second Chance app on NPR’s Science Friday here, and watch a related Reuters video here.

January 9, 2019

Ras Bodik, Alec Wolman, and Aaron Hertzmann recognized as Fellows of the ACM for outstanding contributions to the field of computing

Association for Computing Machinery logo

Three members of the Allen School family were recently named Fellows of the Association for Computing Machinery (ACM) in recognition of their professional achievements. Professor Rastislav (Ras) Bodik of the Allen School’s Programming Languages & Software Engineering (PLSE) group, former postdoc Aaron Hertzmann of Adobe Research, and alumnus Alec Wolman (Ph.D., ‘02) of Microsoft Research were among the 56 ACM members worldwide to be recognized in the 2018 class of Fellows for their outstanding technical contributions in computing and information technology and their service to the computing community.

“In society, when we identify our tech leaders, we often think of men and women in industry who have made technologies pervasive while building major corporations,” said ACM President Cherri M. Pancake in a press release. “At the same time, the dedication, collaborative spirit and creativity of the computing professionals who initially conceived and developed these technologies goes unsung. The ACM Fellows program publicly recognizes the people who made key contributions to the technologies we enjoy.”

Ras Bodik

Ras Bodik portrait

In Ras Bodik’s case, those contributions center on his work in algorithmic program synthesis, a field that he helped establish in the mid-2000s. Program synthesis — sometimes referred to as automatic programming — simplifies the process of writing computer programs by asking the computer to search for a program that accomplishes the user’s goals. Over the last 15 years, Bodik and his collaborators demonstrated practical applications of program synthesis by developing efficient algorithms and making them available to programmers by combining language design with new programmer interaction models.

“One of the benefits of program synthesis is that it makes computing more accessible, even as our systems increase in scale and complexity,” said Bodik. “By making it possible to start from an incomplete specification, such as user demonstrations, program synthesis opens up programming to novice users while it eases the process for those of us who write programs for a living.”

Bodik has shown program synthesis to be a versatile technique that can benefit experts and non-experts alike and attack problems with real-world impact. Bodik’s interest in program synthesis started at the University of Wisconsin, where he worked on mining program specifications from software corpora. When he was a faculty member at the University of California, Berkeley, he and then-Ph.D. student Armando Solar-Lezama, now a member of the faculty at MIT, laid the groundwork for modern program synthesis through so-called program sketches and by reducing the search problem to SAT constraint solving. To help programmers create new synthesizers, he collaborated on Rosette, a solver-aided programming framework developed by Allen School professor Emina Torlak. Rosette demonstrated how program synthesis can make up for the absence of compilers in certain domains and avoid error-prone, low-level code by enabling programmers to produce domain-specific tools for verification, synthesis, and debugging.
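As a toy illustration of the sketching idea, a synthesizer can be viewed as a search for a value of a "hole" left in a partial program, such that the completed program satisfies a specification. The brute-force search below is purely illustrative; systems like Sketch and Rosette instead encode this search as SAT/SMT constraint solving so it scales far beyond enumeration.

```python
# Toy enumerate-and-check synthesizer: fill the hole in a program sketch so
# that the completed program matches every input/output example in the spec.
# (Illustrative only; real synthesizers reduce this search to SAT/SMT.)
def synthesize_hole(sketch, spec_examples, candidates):
    for hole in candidates:
        if all(sketch(x, hole) == expected for x, expected in spec_examples):
            return hole
    return None  # no candidate completes the sketch

# Sketch: "multiply the input by some constant" -- the constant is the hole.
scale = lambda x, hole: x * hole
spec = [(1, 3), (2, 6), (5, 15)]   # desired input/output behavior

print(synthesize_hole(scale, spec, range(10)))  # -> 3
```

The programmer supplies only the shape of the program and examples of its behavior; the search recovers the missing detail, which is the essence of the sketch-based approach.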

An attractive feature of program synthesis is that it can be applied outside the realm of computer science to solve problems in cases where the solution can be modeled as a program — a feature that Bodik has been keen to exploit to generate practical solutions for scientists working in other domains. For example, he and his team have enabled the use of program synthesis by biologists to infer cellular models from mutation experiments, and by data scientists to simplify the layout of data visualizations.

As Bodik and his colleagues have expanded program synthesis into a broadening array of applications, they have taken an interdisciplinary approach to research that has led to advancements in multiple computing domains, including more efficient algorithms to enable synthesis of large SQL queries and the extension of programming by demonstration to the web browser. The Helena project, for example, enabled many teams of social scientists to collect web datasets, which they use to help city governments design new policies.

Bodik is also credited with spurring the rapid advancement of the synthesis community to where it has achieved parity with human programmers on at least a dozen tasks that typically require months of training. Half of those milestones are based on work produced by Bodik and his collaborators.

“We have reached the point where novice programmers can generate programs that function as well or better than those created by experts,” Bodik observed. “Program synthesis, especially when combined with other technologies for human-computer interaction, can be a great leveler.”

Aaron Hertzmann

Aaron Hertzmann portrait

Aaron Hertzmann, a former postdoc in the Allen School’s Graphics & Imaging Laboratory (GRAIL) from 2001 to 2002 and an affiliate faculty member since 2005, was recognized for his contributions spanning computer graphics, non-photorealistic rendering, computer animation, and machine learning. After leaving the Allen School, Hertzmann spent 10 years as a faculty member in the Computer Science Department at the University of Toronto before joining Adobe Research, where he is a principal scientist focused on computer vision and computer graphics.

Hertzmann is known for his work on new methods for extracting meaning from images and modeling the human visual system, as well as the creation of robust software tools for creating expressive, artistic imagery and animation in the style of human painting and drawing. Early in his career, as a member of New York University’s Media Research Laboratory, Hertzmann contributed new techniques for non-photorealistic rendering that combined the expressivity of natural media with the flexibility of computer graphics. For example, he developed a method for painterly rendering to create images from photographs that appear hand-painted, using multiple brush sizes and long, curved brush strokes. He extended the concept to video with new methods for “painting over” successive frames of animation to produce a novel visual style — work that he later built upon to produce AniPaint, an interactive system for generating painterly animation from video sequences that granted users more direct control over stroke synthesis. Among Hertzmann’s many other contributions to computing for art and design are new algorithms for generating line-art illustrations of smooth surfaces, a new method for computing the visible contours of a smooth 3D surface for stylization, and tools for automatically creating graphic design layouts and generating interactive layout suggestions.
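The core loop of layered painterly rendering can be sketched in a few lines: paint coarse-to-fine layers of strokes wherever the canvas still differs noticeably from the reference image. This is a drastically simplified, hypothetical sketch for a grayscale image; the published algorithm additionally blurs the reference at each scale and traces long, curved strokes along image gradients rather than stamping flat circles.

```python
import numpy as np

# Simplified sketch of coarse-to-fine painterly rendering (assumes a
# grayscale float image); illustrative only, not the published algorithm.
def painterly_render(reference, radii=(8, 4, 2), threshold=0.1):
    canvas = np.zeros_like(reference)
    h, w = reference.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for r in radii:                      # largest brush first
        for y in range(0, h, r):
            for x in range(0, w, r):
                # Stamp a flat circular "stroke" where the painting is off.
                if abs(canvas[y, x] - reference[y, x]) > threshold:
                    mask = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
                    canvas[mask] = reference[y, x]
    return canvas
```

Even this crude version captures the key idea: large brushes block in regions quickly, and smaller brushes refine only where the painting still deviates from the source.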

Hertzmann has also worked on a number of projects for building computational models of human motion and for perceiving the 3D structure of people and objects in video that gained traction in the broader graphics and special effects communities. For example, Hertzmann contributed to the development of the style machine, a statistical model that can generate new motion sequences based on learned motion patterns from a series of motion capture sequences — the first system for “animation by example” that has since gained popularity. Other contributions include an inverse kinematics system that produces real-time human poses based on a set of constraints that subsequently was deployed in the gaming industry, and Nonlinear Inverse Optimization, a novel approach for generating realistic character motion based on a dynamical model derived from biomechanics.

More recently, Hertzmann has turned his attention to virtual reality (VR). One of his projects that has gained prominence is Vremiere, a system that enables direct editing of spherical video in immersive environments. This work formed the basis of Adobe’s Project Clover in-VR video editing interface, announced at its MAX Creative Conference in 2016, and earned the team a Best Paper Honorable Mention at the ACM Conference on Human Factors in Computing Systems (CHI 2017). Hertzmann worked with the same group of collaborators to produce CollaVR, a system that enables collaborative review and feedback in immersive environments for multiple users.

“My post-doctoral experience at UW, and the long-term collaborations that arose from it, were some of the richest of my career,” Hertzmann said about his time with GRAIL. “They broadened my experience into several research areas that were new to me.”

Alec Wolman

Alec Wolman portrait

Alec Wolman earned his Ph.D. from the Allen School working with professors Hank Levy and Anna Karlin. He is a principal researcher in Microsoft’s Mobility and Networking Research Group, where he manages a small team of researchers and developers. His research interests span mobile systems, distributed systems, operating systems, internet technologies, security, and wireless networks. In naming him a 2018 Fellow, ACM highlighted his many contributions in the area of trusted mobile systems and services.

One of those contributions was fTPM — short for “firmware Trusted Platform Module” — which was implemented in millions of Windows smartphones and tablets and represented the first implementation of the TPM 2.0 specification. fTPM enables Windows on ARM SoC platforms to offer TPM-based security features including BitLocker, DirectAccess, and Virtual Smart Cards. fTPM leverages ARM TrustZone to implement these secure services on mobile devices and offers security guarantees similar to a discrete TPM chip — one of the most popular forms of trusted hardware in the industry. Wolman was also one of the researchers behind Trusted Language Runtime (TLR), a system that made it easy for smartphone app developers to build and run trusted applications while offering compatibility with legacy software and operating systems. In addition, he contributed to software abstractions for trusted sensors used in mobile applications; the cTPM system to extend trusted computing abstractions across multiple mobile devices; and Sentry, which protects sensitive data on mobile devices from low-cost physical memory attacks.

Wolman’s contributions in distributed systems and cloud services have had a significant impact on Microsoft’s products, serving many millions of users. Recently, he helped design and develop Microsoft Embedded Social (ES), a scalable Azure service that enables application developers to incorporate social engagement features within their apps in a fully customizable manner. ES has been incorporated into nearly a dozen applications and has served roughly 20 million users to date. Previously, Wolman co-developed the partitioning and recovery service (PRS) as a component of the Live Mesh file synchronization product. PRS, which enables data distribution across a set of servers with strong consistency as a reusable component, was later incorporated into the cloud service infrastructure for Windows Messenger and Xbox Live.

Wolman’s work on offloading computations — including Mobile Assistance Using Infrastructure (MAUI) and Kahawai — demonstrated how mobile devices can leverage both cloud and edge computing infrastructure, and is considered seminal research that influenced thousands of follow-on papers. Wolman and his team also collaborated with Allen School researchers on the development of MCDNN, a framework that enables mobile devices to use deep neural networks without overtaxing resources such as battery life, memory, and data usage.

The Fellows Program is the ACM’s most prestigious member grade and represents just one percent of the organization’s global membership. The ACM will formally honor the 2018 Fellows at its annual awards banquet, to be held in San Francisco, California in June.

Learn more about the 2018 class of ACM Fellows here.

Congratulations to Ras, Aaron, and Alec!

January 8, 2019

Ph.D. student Ewin Tang recognized in Forbes’ “30 Under 30” in science for taking the “quantum” out of quantum computing

Ewin Tang

Allen School Ph.D. student Ewin Tang has landed a spot on Forbes’ 2019 list of “30 Under 30” in science for developing a method that enables a classical computer to solve the “recommendation problem” in roughly the same time that a quantum computer could — upending one of the most prominent examples of quantum speedup in the process. Her algorithm offers an efficient solution to a core machine learning problem which models the task of predicting user preferences from incomplete data based on people’s interactions with sites such as Amazon and Netflix.

Tang, who arrived at the University of Washington this past fall to work with professor James R. Lee in the Theory of Computation group, tackled the recommendation problem as an undergraduate research project while at the University of Texas at Austin. The project grew out of a quantum information course she took with UT Austin professor Scott Aaronson, who challenged her to prove that no fast classical algorithm exists for solving the problem. He was inspired to set this particular challenge after Iordanis Kerenidis and Anupam Prakash — researchers at Université Paris Diderot and the University of California, Berkeley, respectively — published a quantum recommendation algorithm in fall 2016 that could solve the problem exponentially faster than any known classical algorithm, in part by relying on a randomized sample of user preferences rather than attempting to reconstruct a full list. But they did not prove definitively that no such classical algorithm existed.

Enter Tang and Aaronson. According to an article about Tang’s discovery that appeared last summer in Quanta Magazine, Aaronson believed that a comparable algorithm didn’t exist, and that his student would confirm Kerenidis’ and Prakash’s discovery. But as Tang worked through the problem during her senior year, she started to believe that, actually, it did exist. After Tang presented her results at a workshop at UC Berkeley that June, other members of the quantum computing community agreed — confirming it as the fastest-known classical algorithm and turning a two-year-old discovery on its head.

Or, as Tang recently explained to GeekWire, “We ended up getting this result in quantum machine learning, and as a nice side effect a classical algorithm popped out.”

The quantum algorithm relies on sampling to make the process of computing high-quality recommendations more efficient, built on the assumption that most users and their preferences can be approximated by their alignment with a small number of stereotypical user preferences. This approach enables the system to bypass reconstruction of the complete preference matrix, in which many millions of users and elements may be represented, in favor of homing in on the highest-value elements that matter most when giving recommendations. Tang employs a similar strategy to achieve comparable results — proving that a classical algorithm can produce recommendations just as well, and nearly as fast, as the quantum algorithm can, without the aid of a quantum computer.
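The shared low-rank assumption can be made concrete with a small numpy experiment: build a preference matrix from just two "stereotypical" users, hide half its entries, and recover a recommendation from a truncated SVD. This dense computation is for intuition only; the actual quantum and classical algorithms use sampling precisely so they never have to materialize or factor the full matrix.

```python
import numpy as np

# Illustrative sketch of the low-rank assumption: every user's preferences
# are (approximately) a mixture of a few stereotypical preference profiles.
rng = np.random.default_rng(0)
stereotypes = rng.random((2, 50))     # 2 stereotypical rows over 50 items
weights = rng.random((200, 2))        # each of 200 users mixes the two
prefs = weights @ stereotypes         # true rank-2 preference matrix

# Hide roughly half the entries, as if users had rated only some items.
observed = prefs * (rng.random(prefs.shape) < 0.5)

# A rank-2 truncated SVD of the observed data approximates the true matrix.
U, s, Vt = np.linalg.svd(observed, full_matrices=False)
approx = U[:, :2] * s[:2] @ Vt[:2]

# Recommend user 0's highest-scoring unobserved item.
unseen = observed[0] == 0
recommendation = np.argmax(np.where(unseen, approx[0], -np.inf))
```

The low-rank structure is what lets both algorithms focus on a handful of dominant directions instead of the millions of raw entries.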

According to Tang’s Ph.D. advisor, Lee, her achievement extends far beyond the original question she set out to answer.

“Ewin’s work provides more than just a (much) faster algorithm for recommendation systems — it gives a new framework for the design of algorithms in machine learning,” Lee said. “She and her collaborators are pursuing applications in clustering, regression, and principal component analysis, which are some of the most fundamental problems in the area. Her work is also a step toward clarifying the type of structure quantum algorithms must exploit in order to achieve an exponential speedup.”

Check out Tang’s Forbes profile here and a related GeekWire article here. Read the original story of her discovery in Quanta Magazine here.

Way to go, Ewin!

December 21, 2018

University of Washington researchers create a buzz with Living IoT system that replaces drones with bees

Bumblebee wearing the Living IoT backpack while foraging on a flower.

A team of researchers in the Networks & Mobile Systems Lab led by Allen School professor Shyam Gollakota and the Autonomous Insect Robotics (AIR) Laboratory led by Mechanical Engineering professor Sawyer Fuller have designed a new mobile platform that combines sensing, computation, and communication in a package small enough to be carried by a bumblebee. Dubbed Living IoT, the system allows nature to take its course while enabling new capabilities in agricultural and environmental monitoring.

Living IoT’s reliance on biological, rather than mechanical, flight presents new opportunities for continuous sensing without having to repeatedly recharge power-hungry batteries throughout the day. However, it did present some novel challenges for the team, not least of which was how to create a form factor small enough and light enough to ride on the back of a bee while still powering data collection and communication.

“We decided to use bumblebees because they’re large enough to carry a tiny battery that can power our system, and they return to a hive every night where we could wirelessly recharge the batteries,” explained Vikram Iyer, a Ph.D. student in Electrical & Computer Engineering and co-primary author on the research paper.

The resulting design, which incorporates an antenna, envelope detector, sensor, microcontroller, backscatter transmitter, and rechargeable battery, weighs in at a minuscule 102 milligrams — roughly half of a bumblebee’s potential payload and less than the maximum weight the team determined the insect could carry without interfering with takeoff or controlled flight.

Vikram Iyer at the UW's urban farm with one of the Living IoT bumblebees

“The rechargeable battery powering the backpack weighs about 70 milligrams,” noted Allen School Ph.D. student Rajalakshmi Nandakumar in a UW News release. “So we had a little over 30 milligrams left for everything else, like the sensors and the localization system to track the insect’s position.”

The battery offers seven hours of uninterrupted data collection time before it has to be recharged. As the bees fly around, their onboard sensors collect data such as temperature, humidity, and light intensity and store it for later upload back at the hive. That upload happens wirelessly using backscatter communication, a technique honed by members of the Networks & Mobile Systems Lab to enable a range of IoT applications.

To enable location-based sensing and data tracking in the absence of flight control, the researchers came up with a novel approach to self-localization that relies on passive operations in place of power-hungry radio receiver components. Strategically positioned access point (AP) radios broadcast signals that are received by the bees as they go about their business. Using changes in signal amplitude extracted by the onboard envelope detector, the team is able to determine the insect’s angle relative to each AP at various points throughout the day and triangulate its position to localize the sensor data. According to Allen School Ph.D. student Anran Wang, the system can pinpoint the bee’s position as long as it remains within 80 meters of the antennas — roughly three-quarters of the length of a football field.
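The triangulation step can be sketched as follows. This is a simplified, hypothetical two-AP example in 2D; the actual system derives its angle estimates from amplitude changes at the envelope detector, which this sketch takes as given.

```python
import numpy as np

def triangulate(ap1, theta1, ap2, theta2):
    """Estimate a 2D position from two access points' bearings (radians)."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve ap1 + t1*d1 = ap2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(ap2, float) - np.asarray(ap1, float))
    return np.asarray(ap1, float) + t[0] * d1

# Two APs at known positions measure bearings toward a bee at (30, 40).
ap1, ap2 = (0.0, 0.0), (100.0, 0.0)
theta1 = np.arctan2(40, 30)          # bearing from AP 1 to the bee
theta2 = np.arctan2(40, 30 - 100)    # bearing from AP 2 to the bee
position = triangulate(ap1, theta1, ap2, theta2)  # ~ (30, 40)
```

With noise-free bearings the two rays intersect exactly at the bee’s location; in practice, noisy angle estimates from more than two APs would be combined in a least-squares fit.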

For the moment, the system is limited to roughly 30 kilobytes of data storage. If that can be expanded to include tiny cameras live streaming information about the condition of crops in the field or even the bees themselves, the notion of using live insects in place of drones for smart agriculture and other applications could really take off.

“Having insects carry these sensor systems around could be beneficial for farms because bees can sense things that electronic objects, like drones, cannot,” explained Gollakota. “With a drone, you’re just flying around randomly, while a bee is going to be drawn to specific things, like the plants it prefers to pollinate. And on top of learning about the environment, you can also learn a lot about how the bees behave.”

The team will present its research paper at MobiCom 2019, the Association for Computing Machinery’s 25th International Conference on Mobile Computing and Networking.

Read the UW News release here and visit the Living IoT project page here. Check out coverage in GeekWire, NBC Mach, CNBC, TechCrunch, Digital Trends, Vice, MIT Technology Review, New Atlas, Futurity, Engadget, Futurism, Inverse, Forbes, KOMO News, KUOW, The Seattle Times, Seattle PI, Business Insider, Digital Journal, and IEEE Spectrum.

Photo credits: Mark Stone/University of Washington

December 13, 2018

MISL researchers earn Best Student Paper Award for designing and demonstrating content-based media search directly in DNA

Kendall Stewart onstage

Kendall Stewart presents a system for content-based media search in DNA

Researchers in the Molecular Information Systems Lab (MISL) have taken another step forward in their quest to develop a next-generation data storage system with the introduction of new mechanisms for content-based similarity search of digital data stored in synthetic DNA. The team, which includes researchers from the University of Washington and Microsoft, took home the Best Student Paper Award in recognition of its work from the 24th International Conference on DNA Computing and Molecular Programming (DNA 24) in October.

The winning paper describes an end-to-end DNA-based architecture for content-based similarity search of stored media — in this case, image files. The MISL team’s contribution includes a novel neural-network-based sequence encoder trained on more than 30,000 images from the Caltech256 image dataset, and a laboratory experiment demonstrating the technique on a set of 100 test images.

Instead of encoding and storing the complete image files, the researchers concentrated on building a database for storing and retrieving their associated metadata. Each image’s encoded DNA strand includes a “feature sequence” representing its semantic features, as well as an “ID sequence” pointing to the location of the complete file in another database.

By adapting a technique from the machine learning community called “semantic hashing” to work with DNA sequences, the team designed an encoder to output feature sequences that react more strongly with the feature sequences of similar images. This enables a molecular “fishing hook”: when a molecule representing a query image is added to the database, similar images react with and stick to the query. The resulting query-target pairs can then be extracted using standard laboratory techniques like magnetic bead filtration.
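The retrieval idea can be sketched in software terms. This is a loose analogy, not the team’s wet-lab protocol: bit vectors stand in for DNA feature sequences, and low Hamming distance stands in for strong hybridization between a query strand and a target’s feature sequence; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming(a, b):
    """Count differing positions — a stand-in for weak hybridization affinity."""
    return int(np.sum(a != b))

# A hypothetical database of 100 images, each represented by a 32-bit
# "feature sequence" produced by an encoder.
database = {f"img_{i}": rng.integers(0, 2, 32) for i in range(100)}

# A query image similar to img_7: its feature sequence differs in 2 bits.
flip = np.zeros(32, dtype=np.int64)
flip[[3, 11]] = 1
query = database["img_7"] ^ flip

# Content-based search: keep targets that "react" with the query, i.e.
# whose feature sequences are within a hybridization-like threshold.
hits = [name for name, feat in database.items() if hamming(query, feat) <= 6]
```

In the lab, the analogous step happens chemically: the query molecule binds to similar feature sequences in a single reaction, and the bound pairs are pulled out by magnetic bead filtration.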

Diagram of the training methodology showing training images of binoculars and a sailboat.

The training methodology used for the sequence encoder. A neural network translates images into DNA sequences, which are compared for similarity. If the sequences are similar but the images are not – or vice versa – the neural network is updated to increase its accuracy.

According to Allen School Ph.D. student and lead author Kendall Stewart, this type of efficient, content-based search mechanism will be key to unlocking DNA’s potential as the digital storage medium of the future.

“We’re approaching a time when zettabytes of data will be produced each year — a daunting challenge that requires us to think beyond the current state of the art,” Stewart explained. “Our approach takes advantage of DNA’s near-data processing capabilities while borrowing from the latest machine learning techniques to present one possible solution for the indexing and retrieval of content-rich media.”

Stewart’s co-authors on the work include Microsoft researcher Yuan-Jyue Chen, MISL members David Ward and Xiaomeng “Aaron” Liu, Allen School and Electrical & Computer Engineering professor Georg Seelig, Allen School professor Luis Ceze, and Microsoft senior researcher and Allen School affiliate professor Karin Strauss.

The team validated its design in wet-lab experiments using 100 target images and 10 query images, with 10 similar images for each query image included in the target set. The results, Stewart said, were “moderately successful,” with visually similar files accounting for 30% of the sequencing reads for each query, despite comprising only 10% of the total database.

The researchers believe their approach can be generalized to databases containing any type of media. While it would be a challenge to scale such a system to larger and more complex datasets, the project opens up a promising new avenue of exploration around DNA-based data processing and content-based search.

Read the research paper here and learn more about MISL’s work here.

Congratulations to Kendall and her entire team!


December 10, 2018

Allen School’s Yin Tat Lee earns Best Paper Award at NeurIPS 2018 for new algorithms for distributed optimization

Yin Tat Lee

A team of researchers that includes professor Yin Tat Lee of the Allen School’s Theory of Computation group has captured a Best Paper Award at the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018). The paper presents two new algorithms that achieve optimal convergence rates for optimizing non-smooth convex functions in distributed networks, which are commonly used in machine learning applications to meet the computational and storage demands of very large datasets.

Performing optimization in distributed networks involves trade-offs between computation and communication time. Recent progress in the field has yielded optimal convergence rates and algorithms for optimizing smooth and strongly convex functions in such networks. In their award-winning paper, Lee and his co-authors extend the same theoretical analysis to solve an even thornier problem: how to optimize these trade-offs for non-smooth convex functions.

Based on its analysis, the team produced the first optimal algorithm for non-smooth decentralized optimization in the setting where the slope of each function is uniformly bounded. Referred to as multi-step primal dual (MSPD), the algorithm surpasses the previous state-of-the-art technique — the primal-dual algorithm — that offered fast communication rates in a decentralized and stochastic setting but without achieving optimality. Under the more challenging global regularity assumption, the researchers present a simple yet efficient algorithm known as distributed randomized smoothing (DRS) that adapts the existing randomized smoothing optimization algorithm for application in the distributed setting. DRS achieves a near-optimal convergence rate and communication cost.
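The randomized smoothing idea that DRS adapts can be illustrated in a single-machine sketch. This is a generic example on the L1 norm, not the paper’s distributed algorithm: replace a non-smooth f with its Gaussian smoothing f_gamma(x) = E[f(x + gamma*Z)], estimate the surrogate’s gradient from samples, and run gradient descent on it.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Non-smooth convex objective: the L1 norm, minimized at the origin."""
    return np.abs(x).sum()

def smoothed_grad(x, gamma=0.1, n_samples=4000):
    """Monte Carlo gradient of f_gamma(x) = E[f(x + gamma*Z)], Z ~ N(0, I).

    Uses the unbiased estimator mean[(f(x + gamma*Z) - f(x)) / gamma * Z];
    subtracting f(x) is a variance-reducing control variate.
    """
    Z = rng.standard_normal((n_samples, x.size))
    vals = (np.abs(x + gamma * Z).sum(axis=1) - f(x)) / gamma
    return (vals[:, None] * Z).mean(axis=0)

# Gradient descent on the smooth surrogate of the non-smooth objective.
x = np.array([2.0, -1.5, 0.5])
for _ in range(200):
    x = x - 0.05 * smoothed_grad(x)
# x has now contracted toward the minimizer at the origin.
```

The smoothing parameter gamma trades off surrogate accuracy against smoothness; in the distributed setting, DRS additionally balances how this sampling interacts with communication across the network.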

Lee contributed to this work in his capacity as a visiting researcher at Microsoft Research and a faculty member at the Allen School. His collaborators include lead author Kevin Scaman, a research scientist at Huawei Technologies’ Paris-based machine learning lab; Microsoft senior researcher Sébastien Bubeck; and researchers Francis Bach and Laurent Massoulié of Inria, the French National Institute for Research in Computer Science and Automation. Members of the team will present the paper later today at the NeurIPS conference taking place this week in Montreal, Canada.

Read the research paper here.

Congratulations, Yin Tat!


December 6, 2018

Allen School kicks off Computer Science Education Week with and the Computer Science Teachers Association

Hadi Partovi kicks off the Computer Science Education Week celebration on the UW Seattle campus.

The Allen School teamed up with and the Computer Science Teachers Association (CSTA) on Monday to host the kickoff event marking the 2018 start of Computer Science Education Week and the largest-ever Hour of Code. The event featured special guests Melinda Gates, co-chair and trustee of the Bill & Melinda Gates Foundation, and Brad Smith, president and chief legal officer of Microsoft.

Gates engaged in a question and answer session with a local student about her inspiration for pursuing computer science and how efforts to engage girls in computer science early are helping to address the gender gap in computing. In addition to giving a shout-out to her high school math teacher, Mrs. Susan Bauer, Gates acknowledged the efforts of the Allen School that have seen us increase our percentage of CS degrees awarded to women to nearly twice the national average. “This isn’t because of chance,” Gates noted. “It’s because they worked at it.”

Noting that she was lucky to have a teacher who introduced her to computer science 35 years ago, Gates hoped that today’s students wouldn’t have to rely on luck to discover computing thanks to programs like, Girls Who Code, and Black Girls Code. “Not every girl is going to have a Mrs. Bauer to introduce her to computing,” Gates said. “But if we build the right support system, she won’t need one.”

In addition to energizing the room full of teachers, administrators, and students about engaging in computer science learning, Gates joined co-founder and CEO Hadi Partovi and CSTA Executive Director Jake Baskin in celebrating the recipients of the Champions for Computer Science Awards. These awards recognize individuals and organizations that are working to make computer science education accessible to everyone — including 2018 honoree and Allen School professor emeritus Richard Ladner.

Richard Ladner accepting his award from Melinda Gates

Richard Ladner (left) accepts his Champions for Computer Science Award from Melinda Gates as Hadi Partovi (right) looks on

Ladner was named a Computer Science Champion based on his leadership in launching and growing AccessCSForAll, a set of curriculum and professional development tools that empower teachers and schools to make computer science accessible to K-12 students with disabilities. CSTA and credited Ladner and collaborators Andreas Stefik and the Quorum programming team for their leadership in ensuring that the drive to increase CS educational opportunities is truly inclusive for all students. Happily, Ladner points out, the accessibility mindset is becoming more ingrained not just in education, but in industry, too — with more companies now focused on developing technologies and products that are built with accessibility in mind to serve the broadest community of users. “Accessibility is becoming mainstream,” Ladner said. “It’s not something one person does, it’s something everyone does.”

Smith — a longtime friend of the Allen School and a staunch advocate for computer science education, generally — ended the program on a high note by announcing Microsoft’s commitment of an additional $10 million to The funding will support the organization’s efforts to make CS-related professional development available to teachers in every school, as well as its advocacy for state-level policies that will increase access to computer science education for K-12 students across the nation.

Allen School ambassadors welcome students and parents to the 2018 Computing Open House

Allen School ambassadors welcome a group of students and parents to the 2018 Computing Open House

While today marked the official kickoff, the Allen School began celebrating Computer Science Education Week early, at our Computing Open House this past weekend. The open house is an annual tradition that coincides with both the start of CS Education Week and the birthday of computer programming pioneer Grace Hopper. More than 500 local students and parents descended upon the Paul G. Allen Center for Computer Science & Engineering for tours, research demos, and talks by industry representatives on how their computer science education helped them to launch fulfilling careers creating technologies that benefit people around the world. Students at the open house had a chance to learn about computer security, program an Edison robot, construct their own messages in binary and DNA, and get hands-on with the latest smartphone-based apps for mobile health.

Thanks to everyone who joined us, whether in person or online, to support computer science education — and congratulations to Richard and the team at Quorum for their much-deserved recognition!

Learn more about CS Education Week here and the Hour of Code here. View all of the Champions for Computer Science honorees here.


December 3, 2018

Allen School team earns Best Paper Award at SenSys 2018 for new low-power wireless localization system

Vikram Iyer and Rajalakshmi Nandakumar holding their SenSys 2018 Best Paper Awards

Vikram Iyer (left) and Rajalakshmi Nandakumar, recipients of the SenSys 2018 Best Paper Award for µLocate

Researchers in the Allen School’s Networks & Mobile Systems Lab have developed the first low-power 3D localization system for sub-centimeter sized devices. The system, µLocate, enables continuous object tracking on mobile devices at distances of up to 60 meters — even through walls — while consuming mere microwatts of power. The team behind µLocate was recognized with a Best Paper Award at the recent Conference on Embedded Networked Sensor Systems (SenSys 2018) organized by the Association for Computing Machinery in Shenzhen, China — demonstrating once again that good things really do come in small packages.

Localization is integral to the Internet of Things (IoT). While WiFi radios and ultra-wideband radios are capable of fulfilling this function, their power needs are such that they can only operate for about a month on small coin cell or button cell batteries. Radio frequency identification (RFID) tags, on the other hand, require significantly less power and are a suitable size, but their limited range and inconsistent operation through barriers make them unsuitable for whole-home and commercial IoT applications.

“When it comes to tracking IoT devices, existing localization systems typically hit a wall due to their functional limitations or outsize power needs that make them impractical for real-world use,” observed Allen School Ph.D. student Rajalakshmi Nandakumar, lead author on the paper. “µLocate overcomes these barriers by eliminating the need for bulky batteries while enabling us to localize across multiple rooms using devices that are smaller than a penny.”

Nandakumar and her colleagues — Electrical & Computer Engineering Ph.D. student Vikram Iyer and Allen School professor Shyam Gollakota — developed a novel architecture that builds on the lab’s previous, pioneering work on long-range backscatter. That project introduced the first low-power, wide-area backscatter communication system using chirp spread spectrum (CSS) modulation. For this latest iteration, the team’s hardware design consists of an access point and a receiver that incorporates a low-power microcontroller equipped with a built-in oscillator. The access point transmits the CSS signal, which the receiver shifts by 1-2 MHz before backscattering it to the access point. The system is designed to operate concurrently across the 900 MHz, 2.4 GHz, and 5 GHz ISM radio bands; the access point extracts and combines the phase information from the backscattered signals across the spectrum to localize the target.
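The value of combining phases across bands can be sketched in one dimension. This is a hypothetical illustration with idealized, noise-free phases, not the paper’s full 3D system: any single band’s backscatter phase wraps every half wavelength and is therefore ambiguous, but the three bands together single out one distance within a coarse search window (assumed known, e.g. from signal strength).

```python
import numpy as np

C = 3e8                                   # speed of light, m/s
freqs = np.array([0.9e9, 2.4e9, 5.0e9])  # carriers in the three ISM bands (Hz)

def round_trip_phase(d, f):
    """Phase accrued by a signal traveling to a tag at distance d and back."""
    return (2 * np.pi * f * 2 * d / C) % (2 * np.pi)

# Phases the access point would measure for a tag 17.37 m away.
true_d = 17.37
measured = np.array([round_trip_phase(true_d, f) for f in freqs])

# Grid-search a coarse window for the distance whose predicted phases best
# match all three bands at once (circular distance handles phase wrapping).
candidates = np.linspace(16.9, 17.9, 100001)
err = np.zeros_like(candidates)
for f, phi in zip(freqs, measured):
    diff = np.abs(round_trip_phase(candidates, f) - phi)
    err += np.minimum(diff, 2 * np.pi - diff) ** 2
estimate = candidates[np.argmin(err)]     # ~ 17.37 m
```

At 5 GHz alone the phase repeats every 3 cm, so dozens of distances in the window would fit; requiring agreement across all three bands leaves only the true distance as a consistent solution.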

The team designed two diminutive receiver prototypes, each of which could last between five and 10 years running on two button cell batteries: a multi-band prototype capable of operating across the aforementioned ISM bands at longer range, and an even smaller single-band prototype that operates at 5 GHz for short-range applications. The researchers evaluated µLocate in an open field and an office building to test its performance in line-of-sight and walled settings, respectively, demonstrating its ability to return a location value within 70 milliseconds at ranges of up to 60 meters. They also deployed the system in multiple real-world scenarios, including several single- and multi-story residences and across multiple rooms in a hospital wing.

“This was our lab’s first foray into both wireless device localization and the design of tiny, programmable devices,” noted Gollakota. “We’re excited that our work in this new line of research is already being recognized within the sensing community.”

To learn more about µLocate, read the research paper here.

Way to go, team!


November 29, 2018
