
Professor Franziska Roesner earns Consumer Reports Digital Lab Fellowship to support research into problematic content in online ads

Franziska Roesner smiling and leaning against a wood and metal railing
Credit: Dennis Wise/University of Washington

As anyone who has visited a website knows, online ads are taking up an increasing amount of page real estate. Depending on the ad, the content might veer from mildly annoying to downright dangerous; sometimes, it can be difficult to distinguish between ads that are deceptive or manipulative by design and legitimate content on a site. Now, Allen School professor Franziska Roesner (Ph.D., ‘14), co-director of the University of Washington’s Security and Privacy Research Lab, wants to shed light on problematic content in the online advertising ecosystem to support public-interest transparency and research.

Consumer Reports selected Roesner as a 2021-2022 Digital Lab Fellow to advance her efforts to create a public-interest online ads archive to document and investigate problematic ads and their impacts on users. With this infrastructure in place, Roesner hopes to support her team and others in developing new user-facing tools to combat the spread of misleading and potentially harmful ad content online. She is one of three public interest technology researchers to be named in the latest cohort of Digital Lab Fellows focused on developing practical solutions for addressing emerging consumer harms in the digital realm. 

This is not a new area of inquiry for Roesner, who has previously investigated online advertising from the perspective of user privacy, such as the use of third-party trackers to collect information from users across multiple websites. Lately, she has expanded her focus to examining the actual content of those ads. Last year, amidst the lead-up to the U.S. presidential election and the pandemic’s growing human and economic toll — and against the backdrop of simmering arguments over the origins of SARS-CoV-2, lockdowns and mask mandates, and potential medical interventions — Roesner and a team of researchers unveiled the findings of a study examining the quality, or lack thereof, of ads that appear on news and media sites. They found that problematic online ads take many forms, and that they appeared equally on both trusted mainstream news sites and low-quality sites devoted to peddling misinformation. In follow-up work, Roesner and her collaborators further studied people’s — not just researchers’ — perceptions of problematic ad content; forthcoming work examines problematic political ads surrounding the 2020 U.S. elections.

“Right now, the web is the wild west of advertising. There is a lot of content that is misleading and potentially harmful, and it can be really difficult for users to tell the difference,” explained Roesner. “For example, ads may take the form of product ‘advertorials,’ in which their similarity to actual news articles lends them an appearance of legitimacy and objectivity. Or they might rely on manipulative or click-baity headlines that contain or imply disinformation. Sometimes, they are disguised as political opinion polls with provocative statements that, when you click on them, ask for your email address and sign you up for a mailing list that delivers you even more manipulative content.”

Roesner is keen to build on her previous work to improve our understanding of how these tactics enable problematic ads to proliferate — and the human toll that they generate in terms of the time and attention wasted and the emotional impact of consuming misinformation. Built on the team’s existing ad collection infrastructure, the ad archive will provide a structured, longitudinal, and (crucially) public look into the ads that people see on the web. These insights will support additional research from Roesner’s team as well as other researchers investigating how misinformation spreads online. Roesner and her collaborators ultimately aim to help “draw the line” between legitimate online advertising content and practices, and problematic content that is harmful to users, content creators, websites, and ad platforms.

But Roesner doesn’t think we should wait for the regulatory framework to catch up. One of her priorities is to protect users from problematic ads, such as by developing tools that automatically block certain ads or empower users to recognize and flag them. While acknowledging that online advertising is here to stay — it underpins the economic model of the web, after all — Roesner believes that there is a better balance to be struck between revenue and the quality of content that people consume on a daily basis as they point and click.

“Even the most respected websites may be inadvertently hosting and assisting the spread of bogus content — which, as things stand, puts the onus on users to assess the veracity of what they are seeing,” said Roesner. “My hope is that this collaboration with Consumer Reports will support efforts to analyze ad content and its impact on users — and generate regulatory and technical solutions that will lead to more positive digital experiences for everyone.”

Consumer Reports created the Digital Lab Fellowship program with support from the Alfred P. Sloan Foundation and welcomed its first cohort last year. 

“People should feel safe with the products and services that fill our lives and homes. That depends on dedicated public interest technologists keeping up with the pace of innovation to effectively monitor the digital marketplace,” Ben Moskowitz, director of the Digital Lab at Consumer Reports, said in a press release. “We are proud to support and work alongside these three Fellows, whose work will increase fairness and trust in the products and services we use every day.”

Read the Consumer Reports announcement here, and learn more about the Digital Lab Fellowship program here.

Congratulations, Franzi!

October 5, 2021

“There’s so much beauty in these tiny things”: Allen School’s Shyam Gollakota named 2021 Moore Inventor Fellow for advancing big ideas on a miniature scale

Portrait of Shyam Gollakota
Credit: Tara Gimmer

Allen School professor Shyam Gollakota has received a 2021 Moore Inventor Fellowship in recognition of his work at the nexus of low-power wireless communication, biology and living organisms. Gollakota, who directs the Allen School’s Networks & Mobile Systems Lab, is the first University of Washington faculty member to receive this prestigious award that nurtures the next generation of scientist-inventors. The Gordon and Betty Moore Foundation established the fellowship program aimed at supporting “50 inventors to shape the next 50 years” in 2016 to mark the 50th anniversary of Moore’s Law positing the exponential growth in computer processing power. 

Power figures prominently in Gollakota’s work — but in his case, the focus has been on finding ways to wirelessly fuel computation in order to cut the cord and lighten the load. Much like the results of the eponymous law articulated by Gordon Moore, Gollakota’s research has led to expanded computational capabilities accompanied by a shrinking form factor.

His first foray into wireless computing resulted in a breakthrough known as ambient backscatter. Together with his UW colleague Joshua Smith, who holds a joint appointment in the Allen School and the Department of Electrical & Computer Engineering, Gollakota developed a battery-free system that used television, WiFi and other wireless signals as both a power source and a mode of communication. In a series of subsequent projects, Gollakota and his collaborators extended these capabilities across greater distances and brought wireless computation to a greater variety of objects.

After looking skyward to enable devices to pull power out of thin air, Gollakota cast his eyes in the opposite direction as he contemplated how to make the most of these new capabilities.

“Outside there’s a whole world on every square foot, with living beings that you don’t even think about. We just walk over it,” Gollakota said in a UW News story. “But there’s so much happening — feats of engineering. There’s so much beauty in these tiny things.”

Gollakota and his colleagues drew inspiration from those “tiny things” to engineer a new line of research he has dubbed the Internet of Biological and Bio-Inspired Things. The concept began to take off with the development of a lightweight wireless sensor backpack small enough to be carried by bumblebees. The onboard sensors gather data about the surrounding environment as the bees go about their daily business; upon their return to the hive each evening, the data they logged is uploaded using backscatter while the tiny battery is wirelessly recharged for the next day’s flight. Gollakota and his collaborators followed up that buzz-worthy project with a wireless sensing package that could be safely air-dropped from great heights by live moths or drones into remote or impassable areas, followed by a miniature remote-control camera that can ride on the back of a beetle. Looking to the future, Gollakota is keen to explore ways to more deeply integrate biology and technology to achieve his vision.

Bumblebee wearing tiny sensor on its back collecting nectar from a flower
Credit: Mark Stone/University of Washington

“Simply put, Shyam is amazing — he is easily the most creative person I have ever met,” his Allen School colleague Thomas Anderson observed earlier this year. “He repeatedly invents and builds prototypes that, before you see them demonstrated, you would have thought impossible.”

While Gollakota’s notion of an Internet of Biological and Bio-Inspired Things may at first seem to belong in the realm of science fiction, it has many practical applications, from wildlife conservation, to smart agriculture, to large-scale environmental monitoring. In parallel with this work, Gollakota has also collaborated with colleagues and clinicians on a series of mobile sensing projects to support contactless disease detection and health monitoring using smartphones and smart speakers.

Gollakota is one of five innovators to be named in the 2021 cohort of Moore Inventor Fellows. He and his fellow honorees were selected from nearly 200 nominations received by the Foundation and will each receive $825,000 to further their inventions. Gollakota, who holds the Torode Family Career Development Professorship in the Allen School, previously earned the Association for Computing Machinery’s ACM Grace Murray Hopper Award in recognition of his early-career technical contributions, MIT Technology Review’s TR35 Award recognizing the world’s top innovators under the age of 35, and a Sloan Research Fellowship — among many other honors since his arrival at the UW in 2012.

Read the Moore Foundation announcement here, Gollakota’s Moore Inventor Fellow profile here, and a related UW News story here.

Congratulations, Shyam!

October 4, 2021

Not just phoning it in: Shwetak Patel honored by Georgia Tech and Business Insider for contributions to low-power sensing and mobile health innovation

Shwetak Patel with arm extended toward camera and color calibration card resting on forearm while a pair of hands extends from off camera holding a cell phone

University of Washington professor Shwetak Patel has earned a place in the Georgia Tech College of Computing’s Hall of Fame and a spot on Business Insider’s recent list of “30 leaders under 40” who are changing health care for his innovative work combining low-power sensing, signal processing and machine learning for applications ranging from non-invasive disease screening to monitoring appliance-level energy consumption. Patel, who holds the Washington Research Foundation Entrepreneurship Endowed Professorship in the Allen School and the UW Department of Electrical & Computer Engineering, is also a serial entrepreneur and director of health technologies at Google Health and Fitbit Research.

“Ten years ago, computing’s role in health care was basically billing, data collection and databases,” Patel observed to Business Insider. “Now computing is playing a critical role in the actual discovery of new interventions in health outcomes.” 

Patel himself has been largely responsible for transforming computing’s contribution from staid billing software to pocket-sized personal health monitor. Evidence of that work is strewn around the Allen School’s Ubiquitous Computing Lab on the UW’s Seattle campus, where stacks of mobile phones and coils of charger cable jostle for space with a variety of 3D-printed accessories, camera color correction cards, the odd blood pressure monitor, and even a life-sized plastic baby doll. (That last one is used for demonstrating an app for detecting infant jaundice.)

Patel’s drive to “democratize diagnostics,” as he once described it, stemmed from his realization that the proliferation of smartphones and their increasingly sophisticated sensing capabilities had the potential to improve health care outcomes for millions of people around the globe. He and his students began thinking about how they could employ these on-board sensors — such as the phone’s camera, microphone, accelerometer, and gyroscope — to augment traditional in-person care by enabling early detection and intervention. They also saw an opportunity to empower people to monitor their health on an ongoing basis, without the need for repeated trips to a clinic or access to specialized equipment, with the help of a device they already carry around with them.

Working with clinicians at UW Medicine, Seattle Children’s and others, Patel and his team developed apps for assessing lung function in people with respiratory illnesses, detecting jaundice in babies and adults, and measuring blood hemoglobin in people with anemia — to name only a few. Patel and his collaborators started a company, Senosis Health, to commercialize their research. After Senosis was acquired by Google, Patel began splitting his time between the UW and the company in order to lead the latter’s mobile health efforts. Patel and the Google Fit team have since released tools, based in part on research originating in his UW lab, that measure heart rate and respiratory rate using the smartphone camera, allowing users to monitor their general health and wellness.

Following the emergence of SARS-CoV-2 early last year, Patel and his lab pivoted to applying what they had learned from their work on those earlier apps to focus on tools that could aid in the pandemic response. For example, Patel and his students have been working on smartphone-based tools for monitoring symptoms such as cough and developed a system to enable contactless measurement of a person’s vital signs via online video. To aid in the community-level response, he also collaborated on the creation of RDTScan to support accurate interpretation of rapid diagnostic test results at the point of capture with the help of a smartphone and demonstrated how air filtration systems on public transit could be used as passive sensing systems to detect viral spread.

Patel’s efforts to advance mobile health sensing were a natural progression from his visionary work on low-power sensing that stretches back to his student days at Georgia Tech. His first foray into the technology was as an undergraduate working on the Aware Home, a demonstration project that sought to imagine the connected home of the future. After earning his bachelor’s, Patel remained in Atlanta to pursue his doctorate, during which time he developed a system for measuring residential energy and water consumption by individual appliances and fixtures from a single point in the home — research that Patel continued to refine and expand upon following his arrival at the UW. He and his Georgia Tech collaborators started a company, Zensi, to commercialize that work; Zensi was subsequently acquired by Belkin.

Next, Patel and his students zoomed out from looking at individual appliances to monitoring the entire home via an ultra-low-power sensing system known as SNUPI, short for Sensor Nodes Utilizing Powerline Infrastructure. SNUPI consisted of a network of low-power sensors that transmitted data about a building’s condition — for example, increased moisture level in the walls — via the structure’s electrical circuit. The system was designed to function for decades without having to replace the batteries. Patel and his team created another spinout company, SNUPI Technologies, to commercialize a residential whole-home hazards monitoring platform under the name of WallyHome that was later acquired by Sears. 

Through the years, Patel and his students have also advanced innovations in motion tracking, object detection, wearable technologies, hyperspectral imaging, and more. Throughout his career, he has earned more than two dozen Best Paper Awards and multiple “test of time” awards at the field’s preeminent conferences focused on ubiquitous computing, mobile computing, pervasive computing and human-computer interaction.

Patel’s induction into his alma mater’s Hall of Fame and the Business Insider recognition are the latest in a string of accolades recognizing the wide-ranging impact of his work. Previously, he was named a Fellow of the Association for Computing Machinery and received the organization’s ACM Prize in Computing for mid-career contributions to the field. Patel is also a past recipient of a MacArthur Foundation “Genius” Award and a Presidential Early Career Award for Scientists and Engineers (PECASE).

Read the Georgia Tech College of Computing Hall of Fame citation here, Patel’s “30 under 40” profile in Business Insider here, and a related article on his work on health care technologies at Google here (paywall).

Congratulations, Shwetak — times two!

Photo credit: Matt Hagen

September 28, 2021

“They were very unsure what to do with us”: How Golden Goose Award-winning researchers at UW and UCSD put the brakes on automobile cybersecurity threats and transformed an industry

Collage of portraits of Stephen Checkoway, Karl Koscher, Stefan Savage and Tadayoshi Kohno
UW and UCSD Golden Goose Award recipients (clockwise from top left): Stephen Checkoway, Karl Koscher, Stefan Savage and Tadayoshi Kohno

In 2010 and 2011, a team of researchers led by Allen School professor Tadayoshi Kohno and Allen School alumnus and University of California San Diego professor Stefan Savage (Ph.D., ‘02) published a pair of papers detailing how they were able to hack into a couple of Chevrolet Impalas and take command of a range of functions, from operating the windshield wipers to applying — or even denying — the brakes. A decade later, Kohno, Savage, and University of Washington alumni Karl Koscher (Ph.D., ‘14), now a research scientist in the Allen School’s Security & Privacy Research Lab, and Stephen Checkoway (B.S., ‘05), a UCSD Ph.D. student at the time who is now a faculty member at Oberlin College, have received the Golden Goose Award from the American Association for the Advancement of Science for demonstrating “how scientific advances resulting from foundational research can help respond to national and global challenges, often in unforeseen ways.”

“More than 10 years ago, we saw that devices in our world were becoming incredibly computerized, and we wanted to understand what the risks might be if they continued to evolve without thought toward security and privacy,” explained Kohno in a UW News release.

Achieving that understanding would go on to have significant real-world impact, influencing “how products are built and how policies are written,” noted Savage. It would also transform not just the automobile manufacturing landscape, but the computer security research landscape as well. 

“The entire automotive security industry grew from this effort,” recalled Kohno. “And I imagine that neighboring industries saw what happened here and didn’t want something similar happening to them.”

“What happened here” was that Kohno and his colleagues demonstrated how a motor vehicle’s computerized systems could be vulnerable to attackers, theoretically endangering the car’s occupants and those who share the road with them. The quartet was aided and abetted by collaborators that included, on the Allen School side, then-student and current professor Franziska Roesner (Ph.D., ‘14), fellow student Alexei Czeskis (Ph.D., ‘13), and professor Shwetak Patel; on the UCSD side, they were joined by postdoc Damon McCoy, master’s student Danny Anderson, professor Hovav Shacham, and the late researcher Brian Kantor.

This “dream team,” as Kohno describes it, set out to reverse-engineer the various vehicle components. The goal was to figure out how they communicated with each other so that they could use that to gain access to the systems that control the vehicle’s functions. The researchers published two papers in rapid succession detailing their findings; the first established how a car’s internal systems were vulnerable to compromise, while the follow-up explored the external attack surface of the vehicle by demonstrating how an attacker could infiltrate and control those systems remotely. The team presented the former at the 2010 IEEE Symposium on Security & Privacy and the latter at the 2011 USENIX Security Symposium.

In a way, Savage recalled, the researchers’ ignorance about how the vehicle’s systems were actually designed to work ended up working to the team’s advantage; it enabled them to approach their task without any preconceptions of what should happen. An example is the brake controller, which they broke into via a technique known as black-box testing or “fuzzing.” As the label suggests, these efforts involved less precision and more “throwing stuff at it,” according to Savage, to see what would stick. The results were enough to stop anyone in their tracks — including the technical experts at GM.
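The black-box “fuzzing” Savage describes can be sketched in miniature: throw semi-random messages at an opaque interface and record which ones provoke a response. Everything below — the `EcuSimulator` class, the secret message ID, the payload format — is invented purely for illustration and is not the team’s actual setup; real automotive fuzzing targets hardware on the vehicle’s internal bus.

```python
import random

class EcuSimulator:
    """Toy stand-in for an opaque electronic control unit: it reacts
    only to one undocumented message ID, which the fuzzer must find."""
    SECRET_ID = 0x2A8  # illustrative; unknown to the attacker

    def send(self, msg_id: int, payload: bytes) -> str:
        if msg_id == self.SECRET_ID:
            return "entered test mode"
        return "no response"

def fuzz(ecu: EcuSimulator, attempts: int = 50_000, seed: int = 0):
    """Throw random IDs and payloads at the ECU ("throwing stuff at it")
    and collect anything that produces an observable reaction."""
    rng = random.Random(seed)
    hits = []
    for _ in range(attempts):
        msg_id = rng.randrange(0x800)  # 11-bit CAN-style ID space
        payload = bytes(rng.randrange(256) for _ in range(8))
        reply = ecu.send(msg_id, payload)
        if reply != "no response":
            hits.append((msg_id, payload, reply))
    return hits

hits = fuzz(EcuSimulator())
```

With no documentation, the fuzzer stumbles onto the hidden message ID simply through volume; in practice the interesting responses (like a brake controller entering test mode) are then reverse-engineered by hand.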

“We figured out ways to put the brake controller into this test mode,” Koscher explained to UW News. “And in the test mode, we found we could either leak the brake system pressure to prevent the brakes from working or keep the system fully pressurized so that it slams on the brakes.”

As the senior Ph.D. students on the project, Koscher and Checkoway spearheaded that discovery, which involved calling into the car’s OnStar unit and instructing it to download and install remote command-and-control software that they had written. With that software in place, they could control the brakes remotely from a laptop — as demonstrated later in a famous “60 Minutes” segment in which the team surprised correspondent Leslie Stahl by bringing the car to a complete stop while she was behind the wheel.

While that made for good television, what is most gratifying for the researchers are the industry and regulatory frameworks that grew out of their discovery. 

For example, GM — along with other manufacturers — hired an entire security team as a direct result of the UW and UCSD research; likewise, the National Highway Traffic Safety Administration (NHTSA) — which previously had no one on staff with computer security expertise and “were very unsure what to do with us,” according to Savage — wound up creating an entire unit devoted to cybersecurity, complete with its own testing lab. In other positive changes, the Society of Automotive Engineers — later renamed SAE International — established a set of security standards that all automobile manufacturers adhere to, and the industry created the Auto-ISAC, a national Information Sharing and Analysis Center, to enhance vehicle cybersecurity and address emerging threats.

The team’s work also paved the way for new research outside of the automotive industry. For example, its results inspired the U.S. Defense Advanced Research Projects Agency (DARPA) to create its HACMS project, short for High-Assurance Cyber Military Systems, to examine the security of cyber-physical systems. And that was just the start.

“My gut tells me that the attention directed at this project helped to build up expertise in this embedded systems realm,” observed Koscher. “What was initially focused on automotive security was then applied to other industries, such as medical devices.”

The project also served to highlight the advantages of working as part of a larger team, as Checkoway discovered to his delight. While various members of the group may have approached a problem from different angles, they would often meet in the middle to come up with a solution.

“This was an extremely collaborative effort,” Checkoway explained last year. “No task was performed by an individual researcher alone. I believe our close collaboration was the key to our success.”

At the time the researchers quietly revealed their results to GM, they couldn’t be sure such a happy outcome was in store. At first, the company’s representatives didn’t believe the team could have done some of the things they had done — or understand how they could possibly have done them. But the team’s non-adversarial approach, in which they opted to walk company representatives through their process and findings while refraining from naming the manufacturer publicly, went a long way toward steering the conversations in a positive direction.

“As academics, we have the opportunity to approach the dialogue around vulnerabilities without really having a stake in the game,” explained Kohno. “We’re not selling vulnerabilities, we’re not selling a product to patch vulnerabilities, and we aren’t a competing manufacturer. So we discovered something, and once we had the results, we wanted to figure out, how can we use this knowledge to make the world a better place?”

The team is quick to credit the federal government for driving investment in a project for which they didn’t have a precise destination in mind when they started. According to Savage, the National Science Foundation’s willingness to back a project that was not guaranteed to pan out was key to enabling them to identify these latent security risks. “We’re extremely grateful to NSF for having flexibility to fund this work that was so speculative and off the beaten path,” Savage said.

Two men wearing masks standing next to car, one is typing on laptop set on car roof
Checkoway (left) and Koscher reunite with Emma the Impala in the UW’s Central Garage. Credit: Mark Stone/University of Washington

It is just the kind of work that the Golden Goose Award was created to recognize. In answer to the late U.S. Senator William Proxmire’s “Golden Fleece Award” ridiculing federal investment that he deemed wasteful, U.S. Representative Jim Cooper conceived of the Golden Goose Award to honor “the tremendous human and economic benefits of federally funded research by highlighting examples of seemingly obscure studies that have led to major breakthroughs and resulted in significant societal impact.”

For Kohno, that impact and this most recent recognition — the team previously earned a Test of Time Award from the IEEE Computer Society Technical Committee on Security and Privacy — are motivation enough to explore where the next security risk may come from. 

“The question that I have now is, as security researchers, what should we be investigating today, such that we have the same impact in the next 10 years?”

The team was formally honored in a virtual ceremony hosted by the AAAS last week. Read the feature story on the UW and UCSD team here, the UW News release here and a related story by Oberlin College here. Learn about all of the 2021 Golden Goose honorees here.

September 27, 2021

A snapshot of the future: Computing goes molecular with DNA-based similarity search from University of Washington and Microsoft researchers

Collection of 20 close-up photos of cats in varying poses, arranged in a grid
Forget social media memes — searching for cat photos is serious business for members of the Molecular Information Systems Lab.

Picture this: You are a researcher in the Molecular Information Systems Lab housed at the Paul G. Allen School. You and your labmates have developed a system for storing and retrieving digital images in synthetic DNA to demonstrate how this extremely dense and durable medium might be used to preserve the world’s growing trove of data. And you have a particular fondness for cats.

If you wanted to sift through photos of frisky felines — and really, what better way to spend an afternoon — how would you pick out the relevant files floating around in your test-tube database without having to sequence the entire pool?

In a paper published recently in the journal Nature Communications, a team of MISL scientists presented the first technique for performing content-based similarity search among digital image files stored in DNA molecules. The approach is akin to that of a modern search engine, albeit in a much smaller form factor than your average server farm and with the potential to be much more energy efficient.

“Content-based search enables us to type a word or phrase into a hundred-page document and be taken to the exact page it appears, or upload a photo of a daisy and get flower images in return,” said co-author Yuan-Jyue Chen, senior researcher at Microsoft and an affiliate professor at the Allen School. “We don’t know the specific page numbers or files we’re looking for, so that computation saves us the trouble of reading the entire document or scrolling through every photo on the internet. Our team took that same idea and applied it to data stored in molecular form.”

Files, whether stored in digital or molecular form, are typically retrieved using a process known in database parlance as key-based retrieval. In an electronic database, the key is usually a file name; in a molecular database, it is a unique sequence, reminiscent of a barcode, that is encoded in the snippets of DNA associated with a particular file. Items with this barcode can be amplified via polymerase chain reaction (PCR) to reassemble a file in its entirety, since a single digital file might be split among hundreds — possibly even thousands — of DNA oligonucleotides, depending on its total size. Generally speaking, key-based retrieval works great when you know the contents of the files and can pick out which ones you want to retrieve. If you don’t, and your data is stored as the As, Ts, Cs and Gs of DNA instead of 0s and 1s, the entire database would have to be sequenced in order to perform a content-based search.
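The contrast between key-based retrieval and content-based search can be modeled with a toy in-memory “pool” of barcode-tagged fragments. The barcode names, fragment size and helper functions below are illustrative inventions, not the MISL encoding: the point is only that fetching by barcode touches a small slice of the pool (the analogue of selective PCR amplification), while searching by content requires reading everything (the analogue of sequencing the entire database).

```python
from collections import defaultdict

OLIGO_PAYLOAD = 12  # characters per fragment (real oligos hold ~100-200 nt)

def store(pool: list, barcode: str, data: str):
    """Split a file into barcode-tagged fragments and add them to the pool."""
    for i in range(0, len(data), OLIGO_PAYLOAD):
        pool.append((barcode, i, data[i:i + OLIGO_PAYLOAD]))

def retrieve_by_key(pool: list, barcode: str) -> str:
    """Key-based retrieval: select only the oligos carrying this barcode
    (in vitro, PCR amplification), then reassemble them in order."""
    fragments = sorted(f for f in pool if f[0] == barcode)
    return "".join(chunk for _, _, chunk in fragments)

def search_by_content(pool: list, phrase: str) -> set:
    """Content-based search without an index: every fragment must be
    read (i.e. the whole pool sequenced) to find matching files."""
    files = defaultdict(str)
    for barcode, i, chunk in pool:
        files[barcode] += chunk  # fragments were appended in order
    return {b for b, data in files.items() if phrase in data}

pool = []
store(pool, "BARCODE_A", "a photo of a tuxedo cat named Janelle")
store(pool, "BARCODE_B", "a photo of a daisy in a field")
```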

Group shot of team members standing in front of metal railing in light-filled atrium constructed of glass, concrete and brick
Left to right: Luis Ceze, Yuan-Jyue Chen, Callista Bee, Karin Strauss, David Ward and Aaron Liu. Credit: Tara Brown Photography

To move beyond the limitations of key-based retrieval, the researchers leveraged DNA’s natural hybridization behavior along with machine learning techniques to enable similarity search to be performed on the stored data. In a conventional digital database, similarity search relies on a set of feature vectors that are stored separately from the original data; when a search is executed, its results point to the location of each data file associated with a particular feature vector. For the molecular version, the MISL team developed an image-to-sequence encoding scheme: a convolutional neural network designates image feature vectors as “similar” or “not similar” and maps them to DNA sequences that will predictably hybridize — or bind — with the reverse complement of a query feature vector processed by the same neural network during execution of a search. The technique can be applied to new images not seen during training, and the entire process is easily extended to other types of data such as videos and text.
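The core of the encoding idea — mapping feature vectors to short codes such that similar inputs yield codes likely to bind — can be sketched without any wet lab. In the toy version below, a random-hyperplane hash stands in for the team’s trained convolutional network, a two-letter alphabet stands in for the full A/C/G/T encoding, and counting matching positions stands in for hybridization strength; all names and parameters are illustrative assumptions, not the published scheme.

```python
import random

DIMS, CODE_LEN = 8, 32
rng = random.Random(42)
# One random hyperplane per code position (the stand-in "encoder weights").
PLANES = [[rng.gauss(0, 1) for _ in range(DIMS)] for _ in range(CODE_LEN)]

def encode(vec):
    """Feature vector -> code: one symbol per hyperplane side. Nearby
    vectors fall on the same side of most hyperplanes, so their codes agree
    in most positions -- the property the learned encoder provides."""
    return "".join("A" if sum(w * x for w, x in zip(p, vec)) >= 0 else "T"
                   for p in PLANES)

def hybridization_score(query_code, target_code):
    """Count of matching positions; in vitro, this roughly corresponds
    to how strongly a target binds the query's reverse complement."""
    return sum(q == t for q, t in zip(query_code, target_code))

# A base vector, a slightly perturbed (similar) one, and an unrelated one.
base = [rng.gauss(0, 1) for _ in range(DIMS)]
near = [x + 0.05 * rng.gauss(0, 1) for x in base]
far = [rng.gauss(0, 1) for _ in range(DIMS)]

near_score = hybridization_score(encode(base), encode(near))
far_score = hybridization_score(encode(base), encode(far))
```

Because similar images score higher, the most frequently recovered targets after a (simulated or real) hybridization step are exactly the near neighbors of the query — the behavior the team observed in the test tube.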

The researchers created an experimental database by running a collection of 1.6 million images through their encoder, which converted them to DNA sequences incorporating their feature vectors, and tacked on a unique barcode identifier for each file. They then performed a similarity search for three photos — including one of a tuxedo cat named Janelle — using the reverse complement of each query image’s encoded feature sequence against a sample of the database. After extracting the hybridized target/query pairs for high-throughput sequencing, they found that the most frequently sequenced oligos did, indeed, correspond to images in the database that were visually similar to the query images.

The team found that the performance of its molecular approach was comparable to that of in silico algorithms representing the state of the art in similarity search. Unlike those algorithms, however, the team points out that DNA-based search has the potential to scale to significantly larger databases without a correspondingly significant increase in processing time and energy consumption, owing to its inherently parallel nature. In this way, researchers have barely scratched the surface of what DNA computing can do.

Portraits of Lee Organick, Melissa Queen and Georg Seelig
Left to right: Lee Organick, Melissa Queen and Georg Seelig. Tara Brown Photography

“As DNA data storage is made more practical by advances in fast and low-cost synthesis, sequencing and automation, the ability to perform computation within these databases will pave the way for hybrid molecular-electronic computer systems. Starting with similarity search is exciting because that is a popular primitive in machine learning systems, which are quickly becoming pervasive,” Allen School professor and co-corresponding author Luis Ceze said.

In addition to Ceze and Chen, contributors to the paper include lead author and Allen School alumna Callista Bee (Ph.D., ‘20), Ph.D. students Melissa Queen and Lee Organick, former research scientist Xiaomeng (Aaron) Liu, lab manager David Ward, Allen School and UW Electrical & Computer Engineering professor Georg Seelig, and co-corresponding author Karin Strauss, an affiliate professor in the Allen School and senior principal research manager at Microsoft. Bee initially presented the team’s proof of concept and precursor to this work at the 24th International Conference on DNA Computing and Molecular Programming (DNA 24), for which she earned a Best Student Paper Award.

Read the team’s latest paper, “Molecular-level similarity search brings computing to DNA data storage,” in Nature Communications.

August 31, 2021

Allen School’s Dhruv Jain wins Microsoft Research Dissertation Grant for his work leveraging HCI and AI to advance sound accessibility

Dhruv Jain smiling in front of glass and wood cabinet

Allen School Ph.D. student Dhruv (DJ) Jain has received a Microsoft Research Dissertation Grant for his work on “Sound Sensing and Feedback Techniques for Deaf and Hard of Hearing Users.” This highly selective grant aims to increase diversity in computing by supporting doctoral students who are underrepresented in the field to “cross the finish line” during the final stages of their dissertation research. 

Jain, who is co-advised by Allen School professor Jon Froehlich and Human Centered Design & Engineering professor and Allen School adjunct professor Leah Findlater, works in the Makeability Lab to advance sound accessibility by designing, building and deploying systems that leverage human computer interaction (HCI) and artificial intelligence (AI). His primary aim is to help people who are d/Deaf and hard of hearing (DHH) to receive important and customized sound feedback. The dissertation grant will support Jain’s continuing work on the design and evaluation of three of these systems. 

One of his projects, HomeSound, is a smart home system that senses and visualizes sound activity like the beeping of a microwave, blaring of a smoke alarm or barking of a dog in different rooms of a home. It consists of a microphone and visual display, which could be either a screen or a smartwatch, with several devices installed throughout the premises. Another system, SoundWatch, is an app that provides always-available sound feedback on smartwatches. When the app picks up a nearby sound like a car honking, a bird chirping or someone hammering, it sends the user a notification along with information about the sound. Jain also has contributed to the development of HoloSound, an augmented reality head-mounted display system that uses deep learning to classify and visualize the identity and location of sounds in addition to providing speech transcription of nearby conversations. All three projects are currently being deployed and tested with DHH users.
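The SoundWatch-style flow described above — classify a nearby sound, then notify the user if it matters — can be sketched as a small pipeline. The classifier here is a placeholder that simply picks the highest-scoring label from precomputed scores; the label set, threshold and message format are assumptions for illustration, not details of the deployed app.

```python
# Hypothetical sketch of a classify-and-notify loop for sound feedback.
# `classify_clip` stands in for the real on-device sound recognition model.

IMPORTANT_SOUNDS = {"car_horn", "smoke_alarm", "door_knock"}

def classify_clip(scores):
    """Placeholder classifier: return the (label, confidence) with top score."""
    return max(scores.items(), key=lambda kv: kv[1])

def maybe_notify(scores, threshold=0.7):
    """Return a notification string for important, confident detections."""
    label, confidence = classify_clip(scores)
    if label in IMPORTANT_SOUNDS and confidence >= threshold:
        return f"Alert: {label} detected ({confidence:.0%} confidence)"
    return None  # suppress unimportant or low-confidence sounds

scores = {"car_horn": 0.91, "bird_chirp": 0.05, "speech": 0.04}
print(maybe_notify(scores))  # → Alert: car_horn detected (91% confidence)
```

Filtering on both a user-relevant label set and a confidence threshold reflects the design goal of customized feedback: DHH users should be alerted to the sounds they care about without being flooded by every detection.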

“Dhruv is a dedicated researcher who draws on his own unique life experiences to design and build interactive systems for people who are deaf or hard of hearing,” Froehlich said. “DJ cares not just about academic results and solving hard technical problems but in pushing towards deployable solutions with real-world impact. SoundWatch is a great example: DJ helped lead a team in building a real-time sound recognizer in the lab that they then translated to an on-watch system deployed on the Google Play Store. Thus far, it’s been downloaded over 500 times.”

Over the course of his research, Jain has published 20 papers at top HCI venues such as the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI), Symposium on User Interface Software and Technology (UIST), Designing Interactive Systems Conference (DIS) and Conference on Computers and Accessibility (ASSETS). His work has received two Best Paper Awards, three Best Paper Honorable Mentions and one Best Artifact Award. 

Learn more about the 2021 grant recipients here.

Congratulations, DJ!

August 18, 2021

University of Washington and Microsoft researchers develop “nanopore-tal” enabling cells to talk to computers

Two people wearing masks and gloves in a lab. One is sitting at a table piping cell culture onto a small rectangular device connected to a laptop, surrounded by various lab supplies. Second person is standing behind first person observing.
MISL researcher Nicolas Cardozo pipes cell cultures containing NanoporeTERs onto a portable MinION nanopore sensing device for processing as professor Jeff Nivala looks on. Dennis Wise/University of Washington

Genetically encoded reporter proteins have been a mainstay of biotechnology research, allowing scientists to track gene expression, understand intracellular processes and debug engineered genetic circuits. But conventional reporting schemes that rely on fluorescence and other optical approaches come with practical limitations that could cast a shadow over the field’s future progress. Now, thanks to a team of researchers at the University of Washington and Microsoft, scientists are about to see reporter proteins in a whole new light. 

In a paper published today in the journal Nature Biotechnology, members of the Molecular Information Systems Laboratory housed at the UW’s Paul G. Allen School of Computer Science & Engineering introduce a new class of reporter proteins that can be directly read by a commercially available nanopore sensing device. The new system ― dubbed “Nanopore-addressable protein Tags Engineered as Reporters,” also known as NanoporeTERs or NTERs for short ― can perform multiplexed detection of protein expression levels from bacterial and human cell cultures far beyond the capacity of existing techniques. 

You could say the new system offers a “nanopore-tal” into what is happening inside these complex biological systems where, up until this point, scientists have largely been operating in the dark.

“NanoporeTERs offer a new and richer lexicon for engineered cells to express themselves and shed new light on the factors they are designed to track. They can tell us a lot more about what is happening in their environment all at once,” said co-lead author Nicolas Cardozo, a graduate student in the UW’s molecular engineering Ph.D. program. “We’re essentially making it possible for these cells to ‘talk’ to computers about what’s happening in their surroundings at a new level of detail, scale and efficiency that will enable deeper analysis than what we could do before.” 

Laptop screen showing squiggly lines of various colors stacked on top of each other, representing signals from a nanopore sensing device
Raw nanopore signals streaming from the MinION device, which contains an array of hundreds of nanopore sensors; each color represents data from an individual nanopore. The team uses machine learning to interpret these signals as NanoporeTERs barcodes. Dennis Wise/University of Washington

Conventional methods that employ optical reporter proteins, such as green fluorescent protein (GFP), are limited in the number of distinct genetic outputs that they can track simultaneously due to their overlapping spectral properties. For example, it’s difficult to distinguish between more than three different fluorescent protein colors, limiting multiplexed reporting to a maximum of three outputs. In contrast, NTERs were designed to carry distinct protein “barcodes” composed of strings of amino acids that, when used in combination, enable a degree of multiplexing approaching an order of magnitude more. These synthetic proteins are secreted outside of the cell into the surrounding environment, where they are collected and directly analyzed using a commercially available nanopore array — in this case, the Oxford Nanopore Technologies MinION device. To make nanopore analysis possible, the NTER proteins were engineered with charged “tails” that get pulled into the tiny nanopore sensors by an electric field. Machine learning is then used to classify their electrical signals in order to determine the output levels of each NTER barcode.
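The final step of that pipeline — turning a raw nanopore current trace into an NTER barcode call — can be illustrated with a deliberately simple stand-in for the team's machine learning model: a nearest-centroid classifier over fixed-length signal windows. The reference traces and current values below are invented for illustration.

```python
# Toy sketch of barcode calling: match an observed nanopore current trace
# to the closest reference trace. The actual paper uses a trained ML model.
import math

REFERENCE_TRACES = {  # hypothetical mean current levels per barcode
    "NTER-01": [80.0, 65.0, 90.0, 70.0],
    "NTER-02": [60.0, 85.0, 55.0, 95.0],
}

def distance(a, b):
    """Euclidean distance between two equal-length signal windows."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(trace):
    """Assign the trace to the barcode with the nearest reference."""
    return min(REFERENCE_TRACES,
               key=lambda name: distance(trace, REFERENCE_TRACES[name]))

observed = [79.0, 66.5, 88.0, 71.0]  # noisy read from one pore
assert classify(observed) == "NTER-01"
```

Because each of the MinION's hundreds of pores streams its own trace, classification like this can run on many reads in parallel, and counting the calls per barcode yields the expression level of each reporter.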

“This is a fundamentally new interface between cells and computers,” explained Allen School research professor and corresponding author Jeff Nivala. “One analogy I like to make is that fluorescent protein reporters are like lighthouses, and NanoporeTERs are like messages in a bottle. Lighthouses are really useful for communicating a physical location, as you can literally see where the signal is coming from, but it’s hard to pack more information into that kind of signal. A message in a bottle, on the other hand, can pack a lot of information into a very small vessel, and you can send many of them off to another location to be read. You might lose sight of the precise physical location where the messages were sent, but for many applications that’s not going to be an issue.”

In developing this new, more expressive vessel, Nivala and his colleagues eschewed time-consuming sample preparation or the need for other specialized laboratory equipment to minimize both latency and cost. The NTERs scheme is also highly extensible. As a proof of concept, the team developed a library of more than 20 distinct tags; according to co-lead author Karen Zhang, the potential is significantly greater.

Four people standing posed against a glass and metal railing in light-filled atrium.
Co-authors of the Nature Biotechnology paper (left to right): Karen Zhang, Nicolas Cardozo, Kathryn Doroschak and Jeff Nivala. Not pictured: Aerilynn Nguyen, Zoheb Siddiqui, Nicholas Bogard, Karin Strauss and Luis Ceze. Tara Brown Photography

“We are currently working to scale up the number of NTERs to hundreds, thousands, maybe even millions more,” Zhang, who graduated this year from the UW with bachelor’s degrees in biochemistry and microbiology, explained. “The more we have, the more things we can track. We’re particularly excited about the potential in single-cell proteomics, but this could also be a game-changer in terms of our ability to do multiplexed biosensing to diagnose disease and even target therapeutics to specific areas inside the body. And debugging complicated genetic circuit designs would become a whole lot easier and much less time consuming if we could measure the performance of all the components in parallel instead of by trial and error.”

MISL researchers have made novel use of the ONT MinION device before. Allen School alumna Kathryn Doroschak (Ph.D., ‘21), one of the lead co-authors of this paper, was also involved in an earlier project in which she and her teammates developed a molecular tagging system to replace conventional inventory control methods. That system relied on barcodes comprising synthetic strands of DNA that could be decoded on demand using the portable ONT reader. This time, she and her colleagues went a step further in demonstrating how versatile such devices can be.

“This is the first paper to show how a commercial nanopore sensor device can be repurposed for applications other than the DNA and RNA sequencing for which they were originally designed,” explained Doroschak. “This is exciting as a precursor for nanopore technology becoming more accessible and ubiquitous in the future. You can already plug a nanopore device into your cell phone; I could envision someday having a choice of ‘molecular apps’ that will be relatively inexpensive and widely available outside of traditional genomics.”

Additional co-authors of the paper include research assistants Aerilynn Nguyen and Zoheb Siddiqui; former postdoc Nicholas Bogard; Allen School affiliate professor Karin Strauss, senior principal research manager at Microsoft; and Allen School professor Luis Ceze.

Read the paper, “Multiplexed direct detection of barcoded protein reporters on a nanopore array,” in Nature Biotechnology.

Editor’s note: Team photo was taken pre-pandemic.

August 12, 2021

Living on the edge: Allen School’s Sewoong Oh aims to advance distributed artificial intelligence for wireless networks as part of new $20 million NSF AI Institute

Sewoong Oh standing with hands on railing

In its latest round of funding intended to strengthen the United States of America’s leadership in artificial intelligence research, the National Science Foundation today designated a new NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI-EDGE) that brings together 30 researchers from 18 universities, industry partners and government labs. Allen School professor Sewoong Oh is among the institute researchers who will spearhead the development of new AI tools and techniques to advance the design of next-generation wireless edge networks. The focus of AI-EDGE, which is led by The Ohio State University, will be on ensuring such networks are efficient, robust and secure.

Among the exciting new avenues Oh and his colleagues are keen to explore is the creation of tools that will enable wireless edge networks to be both self-healing and self-optimizing in response to changing network conditions. The team’s work will support future innovations in a variety of domains, from telehealth and transportation to robotics and aerospace.

“The future is wireless, which means much of the growth in devices and applications will be focused at the network edge rather than in the traditional network core,” Oh said. “There is tremendous benefit to be gained by building new AI tools tailored to such a distributed ecosystem, especially in making these networks more adaptive, reliable and resilient.”

AI-EDGE, which will receive $20 million in federal support over five years, is partially funded by the Department of Homeland Security. It is one of 11 new AI research institutes announced by the NSF today — including the NSF AI Institute for Dynamic Systems led by the University of Washington.

“These institutes are hubs for academia, industry and government to accelerate discovery and innovation in AI,” said NSF Director Sethuraman Panchanathan in the agency’s press release. “Inspiring talent and ideas everywhere in this important area will lead to new capabilities that improve our lives from medicine to entertainment to transportation and cybersecurity and position us in the vanguard of competitiveness and prosperity.”

Oh expects there will be synergy between the work of the new AI-EDGE Institute and the NSF AI Institute for Foundations in Machine Learning unveiled last summer to address fundamental challenges in machine learning and maximize the impact of AI on science and society. As co-PI of IFML, he works alongside Allen School colleagues Byron Boots, Sham Kakade and Jamie Morgenstern and adjunct faculty member Zaid Harchaoui, a professor in the UW Department of Statistics, in collaboration with lead institution University of Texas at Austin and other academic and industry partners to advance the state of the art in deep learning algorithms, robot navigation, and more. In addition to tackling important research questions with real-world impact, AI-EDGE and IFML also focus on advancing education and workforce development to broaden participation in the field.

Read the NSF’s latest announcement here, the UW News release here and The Ohio State University press release here. Learn more about the NSF National AI Research Institutes program here.

July 29, 2021

Rajalakshmi Nandakumar wins SIGMOBILE Doctoral Dissertation Award for advancing wireless sensing technologies that address societal challenges

Rajalakshmi Nandakumar

Allen School alumna Rajalakshmi Nandakumar (Ph.D., ‘20), now a faculty member at Cornell University, received the SIGMOBILE Doctoral Dissertation Award from the Association for Computing Machinery’s Special Interest Group on Mobility of Systems Users, Data, and Computing “for creating an easily-deployed technique for low-cost millimeter-accuracy sensing on commodity hardware, and its bold and high-impact applications to important societal problems.” Nandakumar completed her dissertation, “Computational Wireless Sensing at Scale,” working with Allen School professor Shyam Gollakota in the University of Washington’s Networks & Mobile Systems Lab.

In celebrating Nandakumar’s achievements, the SIGMOBILE award committee highlighted “the elegance and simplicity” of her approach, which turns wireless devices such as smartphones into active sonar systems capable of accurately sensing minute changes in a person’s movements. The committee also heralded her “courage and strong follow-through” in demonstrating how her technique can be applied to real-world challenges — including significant public health issues affecting millions of people around the world.

Among the contributions Nandakumar presented as part of her dissertation was ApneaApp, a smartphone-based system for detecting a potentially life-threatening condition, obstructive sleep apnea, that affects an estimated 20 million people in the United States alone. Unlike the conventional approach to diagnosing apnea, which involves an overnight stay in a specialized lab, the contactless solution devised by Nandakumar and her Allen School and UW Medicine collaborators could be deployed in the comfort of people's homes. ApneaApp employs the phone's speaker and microphone to detect changes in a person's breathing during sleep, without requiring any specialized hardware. It works by emitting inaudible acoustic signals that are then reflected back to the device and analyzed for deviations in the person's chest and abdominal movements. ResMed subsequently licensed the technology and made it available to the public for analyzing sleep quality via its SleepScore app.
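The detection logic rests on a simple observation: once the reflected sonar signals have been converted into a time series of chest displacement, an apnea event shows up as a stretch where that movement nearly flattens out. The sketch below illustrates that last step only; the window length, amplitude threshold and sample values are illustrative assumptions, not parameters from the ApneaApp work.

```python
# Simplified sketch: flag windows where chest-movement amplitude, recovered
# from reflected acoustic signals, drops below a threshold (possible apnea).

def detect_apnea(displacement_mm, window=10, min_amplitude=0.5):
    """Return start indices of windows with near-zero breathing movement."""
    events = []
    for start in range(len(displacement_mm) - window + 1):
        segment = displacement_mm[start:start + window]
        if max(segment) - min(segment) < min_amplitude:
            events.append(start)
    return events

normal = [2.0, 2.8, 2.1, 2.9, 2.0, 2.7, 2.2, 2.8, 2.1, 2.9, 2.0, 2.8]
flat = normal[:4] + [2.0] * 10 + normal[4:]  # breathing pauses mid-night
assert detect_apnea(normal) == []   # regular breathing: no events
assert detect_apnea(flat) != []     # flat stretch flagged as an event
```

The hard part of the real system, of course, is upstream: extracting millimeter-scale displacement from inaudible acoustic reflections on commodity phone hardware, which is precisely the sensing technique the award citation highlights.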

Her early work on contactless sleep monitoring opened Nandakumar’s eyes to the potential for expanding the use of smartphone-based sonar to support early detection and intervention in the case of another burgeoning public health concern: preventable deaths via accidental opioid overdose. This led to the development of Second Chance, an app that a person activates on their smartphone to unobtrusively monitor changes in their breathing and posture that may indicate the onset of an overdose. Catching these early warning signs as soon as they occur would enable the timely administration of life-saving naloxone. Nandakumar’s colleagues created a startup, Sound Life Sciences, to commercialize this and related work that employs sonar to detect and monitor a variety of medical conditions via smart devices. 

The SIGMOBILE Dissertation Award is the latest in a string of honors recognizing Nandakumar for her groundbreaking contributions in wireless systems research. She previously earned a Paul Baran Young Scholar Award from the Marconi Society, a Graduate Innovator Award from UW CoMotion, and a Best Paper Award at SenSys 2018.

“Rajalakshmi is brilliant, creative and fearless in her research. She repeatedly questions conventional wisdom and takes a different path from the rest of the community,” said Allen School professor Ed Lazowska, who supported Nandakumar’s nomination. “Her work points to the future — a future in which advances in computer science and computer engineering will have a direct bearing on our capacity to tackle societal challenges such as health and the environment. Rajalakshmi is a game-changer.”

Way to go, Rajalakshmi!

Photo credit: Sarah McQuate/University of Washington

July 23, 2021

Gray sheep, golden cows, and everything in between: Yejin Choi earns Longuet-Higgins Prize in computer vision for enabling more precise image captions via natural language generation

Sheep standing in glass and metal bus shelter by road
“The gray sheep is by the gray road”

Allen School professor Yejin Choi is among a team of researchers recognized by the Computer Vision Foundation with its 2021 Longuet-Higgins Prize for their paper “Baby talk: Understanding and generating simple image descriptions.” The paper was among the first to explore the new task of generating image captions in natural language by bridging two fields of artificial intelligence: computer vision and natural language processing. Choi, who is also a senior research manager at the Allen Institute for AI (AI2), completed this work while a faculty member at Stony Brook University. She and her co-authors originally presented the paper at the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Baby talk is the process by which adults assist infants in acquiring language and building their understanding of the world that is characterized in part by the use of grammatically simplified speech. Drawing upon this concept, Choi and her collaborators set out to teach machines to generate simple yet original sentences describing what they “see” in a given image. This was a significant departure from conventional approaches grounded in the retrieval and summarization of pre-existing content. To move past the existing paradigm, the researchers constructed statistical models for visually descriptive language by mining and parsing the large quantities of text available online and paired them with the latest recognition algorithms. Their strategy enabled the new system to describe the content of an image by generating sentences specific to that particular image, as opposed to requiring it to shoehorn content drawn from a limited document corpus into a suitable description. The resulting captions, the team noted, had greater relevance and precision in the way they describe the visual content.

Yejin Choi
Yejin Choi

“At the time we did this work, the question of how to align the semantic correspondences or alignments across different modalities, such as language and vision, was relatively unstudied. Image captioning is an emblematic task to bridge the longstanding gap between NLP research and computer vision,” explained Choi. “By bridging this divide, we were able to generate richer visual descriptions that were more in line with how a person might describe visual content — such as their tendency to include not just information on what objects are pictured, but also where they are in relation to each other.” 

This incorporation of spatial relationships into their language generator was key in producing more natural-sounding descriptions. Up to that point, computer vision researchers who focused on text generation from visual content relied on spatial relationships between labeled regions of an image solely to improve labeling accuracy; they did not consider them outputs in their own right on a par with objects and modifiers. By contrast, Choi and her colleagues considered the relative positioning of individual objects as integral to developing the computer vision aspect of their system, to the point of using these relationships to drive sentence generation in conjunction with the depicted objects and their modifiers.
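The generation step can be sketched as template filling over detected objects, their attributes and pairwise spatial prepositions. This toy version is only in the spirit of the "Baby talk" approach (the paper fits statistical models mined from web text rather than fixed templates), and the detections and prepositions below are hand-supplied rather than produced by a vision system.

```python
# Hedged sketch of caption generation from (attribute, object) detections
# plus pairwise spatial relations, in the spirit of the "Baby talk" paper.

def generate_caption(detections, relations):
    """detections: list of (attribute, object);
    relations: list of (i, preposition, j) index pairs into detections."""
    phrases = [f"the {attr} {obj}" for attr, obj in detections]
    clauses = [f"{phrases[i]} is {prep} {phrases[j]}"
               for i, prep, j in relations]
    return "; ".join(clauses).capitalize() + "."

detections = [("gray", "sheep"), ("gray", "road"), ("gray", "sky")]
relations = [(0, "by", 1), (2, "above", 1)]
caption = generate_caption(detections, relations)
# → "The gray sheep is by the gray road; the gray sky is above the gray road."
```

Even this crude version shows why spatial relations matter: without the `relations` input, the system could only list objects ("a sheep, a road, a sky"), whereas driving generation from object pairs yields the positional descriptions the paper's human evaluators found so natural.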

Some of the results were deemed to be “astonishingly good” by the human evaluators. In one example presented in the paper, the system accurately described a “gray sheep” as being positioned “by the gray road”; the “gray sky,” it noted, was above said road. For another image, the system correctly pegged that the “wooden dining table” was located “against the first window.” The system also accurately described the attributes and relative proximity of rectangular buses, shiny airplanes, furry dogs, and a golden cow — among other examples.

Cow with curved horns and shaggy, golden-brown hair standing in a field with trees in the background
The golden cow

The Longuet-Higgins Prize is an annual “test of time” award presented during CVPR by the IEEE Pattern Analysis and Machine Intelligence (PAMI) Technical Committee to recognize fundamental contributions that have had a significant impact in the field of computer vision. Choi’s co-authors on this year’s award-winning paper include then-master’s students Girish Kulkarni, Visruth Premraj and Sagnik Dhar; Ph.D. student SiMing Li; and professors Alexander C. Berg and Tamara L. Berg, both now on the faculty at the University of North Carolina at Chapel Hill.

Read the full paper here.

Congratulations to Yejin and the entire team!

July 22, 2021
