Uncovering secrets of the “black box”: Pedro Domingos, author of “The Master Algorithm,” shares new work examining the inner workings of deep learning models

Pedro Domingos in the Paul G. Allen Center for Computer Science & Engineering. In his latest work, Domingos lifts the lid on the black box of deep learning. Dennis Wise/University of Washington

Deep learning has been immensely successful in recent years, spawning a lot of hope and generating a lot of hype, but no one has really understood why it works. The prevailing wisdom has been that deep learning is capable of discovering new representations of the data, rather than relying on hand-coded features like other learning algorithms do. But because deep networks are black boxes — what Allen School professor emeritus Pedro Domingos describes as “an opaque mess of connections and weights” — how that discovery actually happens is anyone’s guess.

Until now, that is. In a new paper posted on the preprint repository arXiv, Domingos gives us a peek inside that black box and reveals what is — and just as importantly, what isn’t — going on inside. Read on for a Q&A with Domingos on his latest findings, what they mean for our understanding of how deep learning actually works, and the implications for researchers’ quest for a “master algorithm” to unify all of machine learning.

You lifted the lid off the so-called black box of deep networks, and what did you find?

Pedro Domingos: In short, I found that deep networks are not as unintelligible as we thought, but neither are they as revolutionary as we thought. Deep networks are learned by the backpropagation algorithm, an efficient implementation for neural networks of the general gradient descent algorithm that repeatedly tweaks the network’s weights to make its output for each training input better match the true output. That process helps the model learn to label an image of a dog as a dog, and not as a cat or as a chair, for instance. This paper shows that all gradient descent does is memorize the training examples, and then make predictions about new examples by comparing them with the training ones. This is actually a very old and simple type of learning, called similarity-based learning, that goes back to the 1950s. It was a bit of a shock to discover that, more than half a century later, that’s all that is going on in deep learning!
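To make the "repeated tweaks" concrete, here is a minimal sketch (mine, not from the paper) of gradient descent fitting a linear model: each step nudges the weights so the model's outputs better match the training labels.

```python
import numpy as np

# Toy illustration of gradient descent: repeatedly tweak the weights
# so the model's output for each training input matches the true output.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # 20 training examples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                         # target outputs

w = np.zeros(3)                        # start from arbitrary weights
lr = 0.05                              # learning rate
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of mean squared error
    w -= lr * grad                          # the repeated "tweak"

print(np.allclose(w, true_w, atol=1e-3))
```

Backpropagation is this same procedure applied layer by layer through a deep network, with the chain rule supplying each layer's gradient.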

Deep learning has been the subject of a lot of hype. How do you think your colleagues will respond to these findings?

PD: Critics of deep learning, of which there are many, may see these results as showing that deep learning has been greatly oversold. After all, what it does is, at heart, not very different from what 50-year-old algorithms do — and that’s hardly a recipe for solving AI! The whole idea that deep learning discovers new representations of the data, rather than relying on hand-coded features like previous methods, now looks somewhat questionable — even though it has been deep learning’s main selling point.

Conversely, some researchers and fans of deep learning may be reluctant to accept this result, or at least some of its consequences, because it goes against some of their deepest beliefs (no pun intended). But a theorem is a theorem. In any case, my goal was not to criticize deep learning, which I’ve been working in since before it became popular, but to understand it better. I think that, ultimately, this greater understanding will be very beneficial for both research and applications in this area. So my hope is that deep learning fans will embrace these results.

So it’s a good news/bad news scenario for the field? 

PD: That’s right. In “The Master Algorithm,” I explain that when a new technology is as pervasive and game-changing as machine learning has become, it’s not wise to let it remain a black box. Whether you’re a consumer influenced by recommendation algorithms on Amazon, or a computer scientist building the latest machine learning model, you can’t control what you don’t understand. Knowing how deep networks learn gives us that greater measure of control.

So, the good news is that it is now going to be much easier for us to understand what a deep network is doing. Among other things, the fact that deep networks are just similarity-based algorithms finally helps to explain their brittleness, whereby changing an example just slightly can cause the network to make absurd predictions. Up until now, it has puzzled us why a minor tweak would, for example, lead a deep network to suddenly start labeling a car as an ostrich. If you’re training a model for a self-driving car, you probably don’t want to hit either, but for multiple reasons — not least, the predictability of what an oncoming car might do compared to an oncoming ostrich — I would like the vehicle I’m riding in to be able to tell the difference. 

But these findings could be considered bad news in the sense that it’s clear there is not much representation learning going on inside these networks, and certainly not as much as we hoped or even assumed. How to do that remains a largely unsolved problem for our field.

If they are essentially doing 1950s-style learning, why would we continue to use deep networks?

PD: Compared to previous similarity-based algorithms such as kernel machines, which were the dominant approach prior to the emergence of deep learning, deep networks have a number of important advantages. 

One is that they allow incorporating bits of knowledge of the target function into the similarity measure — the kernel — via the network architecture. This is advantageous because the more knowledge you incorporate, the faster and better you can learn. This is a consequence of what we call the “no free lunch” theorem in machine learning: if you have no a priori knowledge, you can’t learn anything from data besides memorizing it. For example, convolutional neural networks, which launched the deep learning revolution by achieving unprecedented accuracy on image recognition problems, differ from “plain vanilla” neural networks in that they incorporate the knowledge that objects are the same no matter where in the image they appear. This is how humans learn, by building on the knowledge they already have. If you know how to read, then you can learn about science much faster by reading textbooks than by rediscovering physics and biology from scratch.
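The translation-invariance knowledge Domingos mentions can be seen in a small sketch (my illustration, not from the interview): a convolutional filter shares one set of weights across all positions, so it responds identically to the same pattern wherever it appears.

```python
import numpy as np

# Illustrative sketch: a convolutional filter shares its weights across
# positions, so it detects the same pattern no matter where it appears.
pattern = np.array([1.0, -1.0, 1.0])   # the "object" we want to detect
kernel = pattern                       # filter weights = pattern template

def conv1d(signal, kernel):
    """Valid cross-correlation: slide the same weights over every position."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

early = np.zeros(10); early[1:4] = pattern   # pattern near the start
late = np.zeros(10); late[6:9] = pattern     # same pattern near the end

# The peak response is identical; only its location shifts.
print(conv1d(early, kernel).max() == conv1d(late, kernel).max())
```

A plain fully connected network would have to learn the pattern separately at every position; weight sharing builds that knowledge in from the start.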

Another advantage to deep networks is that they can bring distant examples together into the same region, which makes learning more complex functions easier. And through superposition, they’re much more efficient at storing and matching examples than other similarity-based approaches.

Can you describe superposition for those of us who are not machine learning experts?

PD: Yes, but we’ll have to do some math. The weights produced by backpropagation contain a superposition of the training examples. That is, the examples are mapped into the space of variations of the function being learned and then added up. As a simple analogy, if you want to compute 3 x 5 + 3 x 7 + 3 x 9, it would be more efficient to instead compute 3 x (5 + 7 + 9) = 3 x 21. The 5, 7 and 9 are now “superposed” in the 21, but the result is still the same as if you separately multiplied each by 3 and then added the results.

The practical result is that deep networks are able to speed up learning and inference, making them more efficient, while reducing the amount of computer memory needed to store the examples. For instance, if you have a million images, each with a million pixels, you would need on the order of terabytes to store them. But with superposition, you only need an amount of storage on the order of the number of weights in the network, which is typically much smaller. And then, if you want to predict what a new image contains, such as a cat, you need to cycle through all of those training images and compare them with the new one. That can take a long time. With superposition, you just have to pass the image through the network once. That takes much less time to execute. It’s the same with answering questions based on text; without superposition, you’d have to store and look through the corpus, instead of a compact summary of it.
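This efficiency gain can be demonstrated with a simplified sketch (my linear-similarity simplification of the paper's point): comparing a new example against every stored training example one by one gives the same prediction as a single pass through precomputed, "superposed" weights.

```python
import numpy as np

# Hedged sketch of superposition for a linear similarity measure:
# folding the training examples into one weight vector gives the same
# prediction as cycling through them, but with far less storage and time.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 64))        # 1000 stored training examples
alpha = rng.normal(size=1000)          # per-example coefficients
x_new = rng.normal(size=64)            # new example to classify

# Similarity-based view: compare the new example with every training one.
slow = sum(a * (xi @ x_new) for a, xi in zip(alpha, X))

# Superposed view: fold the examples into one weight vector up front.
w = alpha @ X                          # 64 numbers instead of 1000 x 64
fast = w @ x_new

print(np.isclose(slow, fast))
```

By linearity the two answers agree exactly, yet the superposed version stores 64 numbers instead of 64,000 and does one pass instead of a thousand comparisons.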

So your findings will help to improve deep learning models?

PD: That’s the idea. Now that we understand what is happening when the aforementioned car suddenly becomes an ostrich, we should be able to account for that brittleness in the models. If we think of a learned model as a piece of cheese and the failure regions as holes in that cheese, we now understand better where those holes are, and what their shape and size is. Using this knowledge, we can actively figure out where we need new data or adjustments to the model to fix the holes. We should also improve our ability to defend against attacks that cause deep networks to misclassify images by tweaking some pixels such that they cause the network to fall into one of those holes. An example would be attempts to fool self-driving cars into misrecognizing traffic signs.

What are the implications of your latest results in the search for the master algorithm?

PD: These findings represent a big step forward in unifying the five major machine learning paradigms I described in my book, which is our best hope for arriving at that universal learner, what I call the “master algorithm.” We now know that all learning algorithms based on gradient descent — including but not limited to deep networks — are similarity-based learners. This fact serves to unify three of the five paradigms: neural, probabilistic, and similarity-based learning. Tantalizingly, it may also be extensible to the remaining two, symbolic and genetic learning.

Given your findings, what’s next for deep learning? Where does the field go from here?

PD: I think deep learning researchers have become too reliant on backpropagation as the near-universal learning algorithm. Now that we know how limited backprop is in terms of the representations it can discover, we need to look for better learning algorithms! I’ve done some work in this direction, using combinatorial optimization to learn deep networks. We can also take inspiration from other fields, such as neuroscience, psychology, and evolutionary biology. Or, if we decide that representation learning is not so important after all — which would be a 180-degree change — we can look for other algorithms that can form superpositions of the examples and that are compact and generalize well.

Now that we know better, we can do better.

Read the paper on arXiv here.

December 2, 2020

Allen School’s Pedro Domingos and Daniel Weld elected Fellows of the American Association for the Advancement of Science

The American Association for the Advancement of Science, the world’s largest general scientific society, has named Allen School professor emeritus Pedro Domingos and professor Daniel Weld among its class of 2020 AAAS Fellows honoring members whose scientifically or socially distinguished efforts have advanced science or its applications. Both Domingos and Weld were elected Fellows in the organization’s Information, Computing, and Communication section for their significant impact in artificial intelligence and machine learning research.

Pedro Domingos


Domingos was honored by the AAAS for wide-ranging contributions in AI spanning more than two decades and 200 technical publications aimed at making it easier for machines to discover new knowledge, learn from experience, and extract meaning from data with little or no help from people. Prominent among these, to his AAAS peers, was his introduction of Markov logic networks unifying logical and probabilistic reasoning. He and collaborator Matthew Richardson (Ph.D., ‘04) coined the term Markov logic network (MLN) when they presented their simple yet efficient approach combining first-order logic and probabilistic graphical models to support learning and inference.

Domingos’ work has resulted in several other firsts that represented significant leaps forward for the field. He again applied Markov logic to good effect to produce the first unsupervised approach to semantic parsing — a key method by which machines extract knowledge from text and speech and a foundation of machine learning and natural language processing — in collaboration with then-student Hoifung Poon (Ph.D., ‘11). Later, Domingos worked with graduate student Austin Webb (M.S., ‘13) on Tractable Markov Logic (TML), the first non-trivially tractable first-order probabilistic language that suggested efficient first-order probabilistic inference could be feasible on a larger scale.

Domingos also helped launch a new branch of AI research focused on adversarial learning through his work with a team of students on the first algorithm to automate the process of adversarial classification, which enabled data mining systems to adapt in the face of evolving adversarial attacks in a rapid and cost-effective way. Among his other contributions was the Very Fast Decision Tree learner (VFDT) for mining high-speed data streams, which retained its status as the fastest such tool available for 15 years after Domingos and Geoff Hulten (Ph.D., ‘05) first introduced it.

In line with the AAAS’ mission to engage the public in science, in 2015 Domingos published The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Geared to the expert and layperson alike, the book offers a comprehensive exploration of how machine learning technologies influence nearly every aspect of people’s lives — from what ads and social posts they see online, to what route their navigation system dictates for their commute, to what movie a streaming service suggests they should watch next. It also serves as a primer on the various schools of thought, or “tribes,” in the machine learning field that are on a quest to find the master algorithm capable of deriving all the world’s knowledge from data.

Prior to this latest honor, Domingos was elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and earned two of the highest accolades in data science and AI: the SIGKDD Innovation Award from the Association of Computing Machinery’s Special Interest Group on Knowledge Discovery and Data Mining, and the IJCAI John McCarthy Award from the International Joint Conference on Artificial Intelligence.

Daniel S. Weld


AAAS recognized Weld for distinguished contributions in automated planning, software agents, crowdsourcing, and internet information extraction during a research career that spans more than 30 years. As leader of the UW’s Lab for Human-AI Interaction, Weld seeks to combine human and machine intelligence to accomplish more than either could on their own. To that end, he and his team focus on explainable machine learning, intelligible and trustworthy AI, and human-AI team architectures to enable people to better understand and control AI-driven tools, assistants, and systems.

Weld has focused much of his career on advanced intelligent user interfaces for enabling more seamless human-machine interaction. Prominent among these is SUPPLE, a system he developed with Krzysztof Gajos (Ph.D., ‘08) that dynamically and optimally renders user interfaces based on device characteristics and usage patterns while minimizing user effort. Recognizing the potential for that work to improve the accessibility of online tools for people with disabilities, the duo subsequently teamed up with UW Information School professor and Allen School adjunct professor Jacob Wobbrock to extend SUPPLE’s customization to account for a user’s physical capabilities as well.

Another barrier that Weld has sought to overcome is the amount of human effort required to organize and maintain the very large datasets that power AI applications. To expedite the process, researchers turned to crowdsourcing, but the sheer size and ever-changing nature of the datasets still made it labor-intensive. Weld, along with Jonathan Bragg (Ph.D., ‘18) and affiliate faculty member Mausam (Ph.D., ‘07), created Deluge to optimize the process of multi-label classification that significantly reduced the amount of labor required compared to the previous state of the art without sacrificing quality. Quality control is a major theme of Weld’s work in this area, which has yielded new tools such as Sprout for improving task design, MicroTalk and Cicero for augmenting decision-making, and Gated Instruction for more accurate relation extraction.

In addition to his technical contributions, AAAS also cited Weld’s impact via the commercialization of new AI technologies. During his tenure on the UW faculty, he co-founded multiple venture-backed companies based on his research: Netbot Inc., creator of the first online comparison shopping engine, which was acquired by Excite; AdRelevance, an early provider of tools for monitoring online advertising data, which was acquired by Nielsen NetRatings; and Nimble Technology, a provider of business intelligence software, which was acquired by Actuate. Weld has since gone from founder to funder as a venture partner and member of the Technology Advisory Board at Madrona Venture Group.

Weld, who holds the Thomas J. Cable/WRF Professorship, presently splits his time between the Allen School, Madrona, and the Allen Institute for Artificial Intelligence (AI2), where he directs the Semantic Scholar research group focused on the development of AI-powered research tools to help scientists overcome information overload and extract useful knowledge from the vast and ever-growing trove of scholarly literature. Prior to this latest recognition by AAAS, Weld was elected a Fellow of both the AAAI and the ACM. He is the author of roughly 200 technical papers and two books on AI on the theories of comparative analysis and planning-based information agents, respectively.

Domingos and Weld are among four UW faculty members elected as AAAS Fellows this year. They are joined by Eberhard Fetz, a professor in the Department of Physiology & Biophysics and DXARTS who was honored in the Neuroscience section for his contributions to understanding the role of the cerebral cortex in controlling ocular and forelimb movements as well as motor circuit plasticity, and Daniel Raftery, a professor in UW Medicine’s Department of Anesthesiology and Pain Medicine who was honored in the Chemistry section for his contributions in the fields of metabolomics and nuclear magnetic resonance, including advanced analytical methods for biomarker discovery and cancer diagnosis.

Read the AAAS announcement here and the UW News story here.

Congratulations to Pedro, Dan, and all of the honorees!

November 24, 2020

Porcupine molecular tagging scheme offers a sharp contrast to conventional inventory control systems


Many people have had the experience of being poked in the back by those annoying plastic tags while trying on clothes in a store. That is just one example of radio frequency identification (RFID) technology, which has become a mainstay not just in retail but also in manufacturing, logistics, transportation, health care, and more. And who wouldn’t recognize the series of black and white lines comprising that old grocery-store standby, the scannable barcode? That invention — which originally dates back to the 1950s — eventually gave rise to the QR code, whose pixel patterns serve as a bridge between physical and digital content in the smartphone era.

Despite their near ubiquity, these object tagging systems have their shortcomings: they may be too large or inflexible for certain applications, they are easily damaged or removed, and they may be impractical to apply in high quantities. But recent advancements in DNA-based data storage and computation offer new possibilities for creating a tagging system that is smaller and lighter than conventional methods.

That’s the point of Porcupine, a new molecular tagging system introduced by University of Washington and Microsoft researchers that can be programmed and read within seconds using a portable nanopore device. In a new paper published in Nature Communications, the team in the Molecular Information Systems Laboratory (MISL) describe how dehydrated strands of synthetic DNA can take the place of bulky plastic or printed barcodes. Building on recent developments in nanopore-based DNA sequencing technologies and raw signal processing tools, the team’s inexpensive and user-friendly design eschews the need for access to specialized labs and equipment. 

“Molecular tagging is not a new idea, but existing methods are still complicated and require access to a lab, which rules out many real-world scenarios,” said lead author Kathryn Doroschak, a Ph.D. student in the Allen School. “We designed the first portable, end-to-end molecular tagging system that enables rapid, on-demand encoding and decoding at scale, and which is more accessible than existing molecular tagging methods.”

Diagram of steps in Porcupine tagging system

Instead of radio waves or printed lines, the Porcupine tagging scheme relies on a set of distinct DNA strands called molbits — short for molecular bits — that incorporate highly separable nanopore signals to ease later readout. Each individual molbit comprises one of 96 unique barcode sequences combined with a longer DNA fragment selected from a set of predetermined sequence lengths. Under the Porcupine system, the binary 0s and 1s of a digital tag are signified by the presence or absence of each of the 96 molbits.
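The presence/absence encoding can be sketched in a few lines (the names and helper functions below are my illustration, following the article's description, not the Porcupine codebase): a digital tag is a 96-bit string, and each 1-bit means the corresponding molbit is mixed into the physical tag.

```python
# Illustrative sketch of the molbit encoding described in the article:
# a 96-bit digital tag maps to the presence or absence of 96 DNA strands.
NUM_MOLBITS = 96

def encode_tag(bits: str) -> set[int]:
    """Return the indices of the molbits to mix into the physical tag."""
    assert len(bits) == NUM_MOLBITS and set(bits) <= {"0", "1"}
    return {i for i, b in enumerate(bits) if b == "1"}

def decode_tag(present: set[int]) -> str:
    """Reconstruct the bit string from the molbits detected at readout."""
    return "".join("1" if i in present else "0" for i in range(NUM_MOLBITS))

tag = "101" + "0" * 93                 # a toy 96-bit tag
molbits = encode_tag(tag)              # molbits 0 and 2 are present
print(decode_tag(molbits) == tag)
```

Reading a tag then amounts to detecting which barcode sequences show up in the nanopore signal and flipping the corresponding bits back on.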

“We wanted to prove the concept while achieving a high rate of accuracy, hence the initial 96 barcodes, but we intentionally designed our system to be modular and extensible,” explained MISL co-director Karin Strauss, senior principal research manager at Microsoft Research and affiliate professor in the Allen School. “With these initial barcodes, Porcupine can produce roughly 4.2 billion unique tags using basic laboratory equipment without compromising reliability upon readout.”

Although DNA is notoriously expensive to read and write, Porcupine gets around this by presynthesizing the fragments of DNA. In addition to lowering the cost, this approach has the added advantage of enabling users to arbitrarily mix existing strands to quickly and easily create new tags. The molbits are prepared for readout during initial tag assembly and then dehydrated to extend shelf life of the tags. This approach protects against contamination from other DNA present in the environment while simultaneously reducing readout time later.

Another advantage of the Porcupine system is that molbits are extremely tiny, measuring only a few hundred nanometers in length. In practical terms, this means each molecular tag is small enough to fit over a billion copies within one square millimeter of an object’s surface. This makes them ideal for keeping tabs on small items or flexible surfaces that aren’t suited to conventional tagging methods. Invisible to the naked eye, the nanoscale form factor also adds another layer of security compared to conventional tags.

The Porcupine team: (top, from left) Kathryn Doroschak, Karen Zhang, Melissa Queen, Aishwarya Mandyam; (bottom, from left) Karin Strauss, Luis Ceze, Jeff Nivala

“Unlike existing inventory control methods, DNA tags can’t be detected by sight or touch. Practically speaking, this means they are difficult to tamper with,” explained co-author Jeff Nivala, a research scientist at the Allen School. “This makes them ideal for tracking high-value items and separating legitimate goods from forgeries. A system like Porcupine could also be used to track important documents. For example, you could envision molecular tagging being used to track voters’ ballots and prevent tampering in future elections.”

To read the data in a Porcupine tag, a user rehydrates the tag and runs it through a portable Oxford Nanopore Technologies’ MinION device. To demonstrate, the researchers encoded and then decoded their lab acronym, “MISL,” reliably and within a few seconds using the Porcupine system. As advancements in nanopore technologies make them increasingly affordable, the team believes molecular tagging could become an increasingly attractive option in a variety of real-world settings.

“Porcupine is one more exciting example of a hybrid molecular-electronic system, combining molecular engineering, new sensing technology and machine learning to enable new applications,” said Allen School professor and MISL co-director Luis Ceze.

In addition to Ceze, Doroschak, Nivala and Strauss, contributors to the project include Allen School undergraduate Karen Zhang, master’s student Aishwarya Mandyam, and Ph.D. student Melissa Queen. This research was funded in part by the Defense Advanced Research Projects Agency (DARPA) under its Molecular Informatics Program and gifts from Microsoft.

Read the paper in Nature Communications here.

November 3, 2020

Allen School professor Yin Tat Lee earns Packard Fellowship to advance the fundamentals of modern computing

Yin Tat Lee, a professor in the Allen School’s Theory of Computation group and visiting researcher at Microsoft Research, has earned a Packard Fellowship for Science and Engineering for his work on faster optimization algorithms that are fundamental to the theory and practice of computing and many other fields, from mathematics and statistics, to economics and operations research. Each year, the David and Lucile Packard Foundation bestows this prestigious recognition upon a small number of early-career scientists and engineers who are at the leading edge of their respective disciplines. Lee is among just 20 researchers nationwide — and one of only two in the Computer & Information Sciences category — to be chosen as members of the 2020 class of fellows. 

“In a year when we are confronted by the devastating impacts of a global pandemic, racial injustice, and climate change, these 20 scientists and engineers offer us a ray of hope for the future,” Frances Arnold, Packard Fellowships Advisory Panel Chair and 2018 Nobel Laureate in Chemistry, said in a press release. “Through their research, creativity, and mentorship to their students and labs, these young leaders will help equip us all to better understand and address the problems we face.”

Lee’s creative approach to addressing fundamental problems in computer science became apparent during his time as a Ph.D. student at MIT, where he earned the George M. Sprowls Award for outstanding doctoral thesis for advancing state-of-the-art solutions to important problems in linear programming, convex programming, and maximum flow. Lee’s philosophy toward research hinges on a departure from the conventional approach taken by many theory researchers, who tend to view problems in continuous optimization and in combinatorial, or discrete, optimization in isolation. Among his earliest successes was a new interior point method for solving general linear programs that produced the first significant improvement in the running time of linear programming in more than two decades — a development that earned him and his collaborators both the Best Student Paper Award and a Best Paper Award at the IEEE Symposium on Foundations of Computer Science (FOCS 2014). Around that same time, Lee also contributed to a new approximate solution to the maximum flow problem in near-linear time, for which he and the team were recognized with a Best Paper Award at the ACM-SIAM Symposium on Discrete Algorithms (SODA 2014). The following year, Lee and his colleagues once again received a Best Paper Award at FOCS, this time for unveiling a faster cutting plane method for solving convex optimization problems in near-cubic time.

Since his arrival at the University of Washington in 2017, Lee has continued to show his eagerness to apply techniques from one area of theoretical computer science to another in unexpected ways — often to great effect. 

“Even at this early stage in his career, Yin Tat is regarded as a revolutionary figure in convex optimization and its applications in combinatorial optimization and machine learning,” observed his Allen School colleague James Lee. “He often picks up new technical tools as if they were second nature and then applies them in remarkable and unexpected ways. But it’s at least as surprising when he uses standard tools and still manages to break new ground on long-standing open problems!”

One of those problems involved the question of how to optimize non-smooth convex functions in distributed networks to enable the efficient deployment of machine learning applications that rely on massive datasets. Researchers had already made progress in optimizing the trade-offs between computation and communication time for smooth and strongly convex functions in such networks; Lee and his collaborators were the first to extend a similar theoretical analysis to non-smooth convex functions. The outcome was a pair of new algorithms capable of achieving optimal convergence rates for this more challenging class of functions — and yet another Best Paper Award for Lee, this time from the flagship venue for developments in machine learning research, the Conference on Neural Information Processing Systems (NeurIPS 2018).

Since then, Lee’s contributions have included the first algorithm capable of solving dense bipartite matching in nearly linear time, and a new framework for solving linear programs as fast as linear systems for the first time. The latter work incorporates new techniques that are extensible to a broader class of convex optimization problems.

Having earned a reputation as a prolific researcher — he once set a record for the total number of papers from the same author accepted at one of the top theory conferences, the ACM Symposium on Theory of Computing (STOC), in one year — Lee also has received numerous accolades for the quality and impact of his work. These include a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, a National Science Foundation CAREER Award, and the A.W. Tucker Prize from the Mathematical Optimization Society.

“Convex optimization is the workhorse that powers much of modern machine learning, and therefore, modern computing. Yin Tat is not only a pivotal figure in the theory that underpins our field, but also one of the brightest young stars in all of computer science,” said Magdalena Balazinska, professor and director of the Allen School. “Combined with his boundless curiosity and passion for collaboration, Yin Tat’s depth of knowledge and technical skill hold the promise for many future breakthroughs. We are extremely proud to have him as a member of the Allen School faculty.”

Lee is the fifth Allen School faculty member to be recognized by the Packard Foundation. As one of the largest nongovernmental fellowships in the country supporting science and engineering research, the Packard Fellowship provides $875,000 over five years to each recipient to grant them the freedom and flexibility to pursue big ideas.

Read the Packard Foundation announcement here.

Congratulations, Yin Tat!

October 15, 2020

Allen School, UCLA and NTT Research cryptographers solve decades-old problem by proving the security of indistinguishability obfuscation

Allen School professor Rachel Lin helped solve a decades-old problem of how to prove the security of indistinguishability obfuscation (iO)

Over the past 20 years, indistinguishability obfuscation (iO) has emerged as a potentially powerful cryptographic method for securing computer programs by making them unintelligible to would-be hackers while retaining their functionality. While the mathematical foundation of this approach was formalized back in 2001 and has spawned more than 100 papers on the subject, much of the follow-up work relied upon new hardness assumptions specific to each application — assumptions that, in some cases, have been broken through subsequent cryptanalysis. Since then, researchers have been stymied in their efforts to achieve provable security guarantees for iO from well-studied hardness assumptions, leaving the concept of iO security on shaky ground.

That is, until now. In a new paper recently posted on public archives, a team that includes University of California, Los Angeles graduate student and NTT Research intern Aayush Jain; Allen School professor Huijia (Rachel) Lin; and professor Amit Sahai, director of the Center for Encrypted Functionalities at UCLA, has produced a theoretical breakthrough that, as the authors describe it, finally puts iO on terra firma. In their paper “Indistinguishability Obfuscation from Well-Founded Assumptions,” the authors show, for the first time, that provably secure iO can be constructed from the subexponential hardness of four well-founded assumptions, all of which have a long history of study well-rooted in complexity, coding, and number theory: Symmetric External Diffie-Hellman (SXDH) on pairing groups, Learning with Errors (LWE), Learning Parity with Noise (LPN) over large fields, and a Boolean Pseudo-Random Generator (PRG) that is very simple to compute.
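Collecting the four assumptions named above, the headline result can be stated informally as follows (a paraphrase for exposition, not the paper’s exact statement):

```latex
% Informal paraphrase of the Jain-Lin-Sahai result described above.
\textbf{Theorem (informal).} Assuming the subexponential hardness of
(1) the Symmetric External Diffie--Hellman (SXDH) assumption on pairing groups,
(2) Learning with Errors (LWE),
(3) Learning Parity with Noise (LPN) over large fields, and
(4) the existence of a Boolean pseudo-random generator computable in $\mathsf{NC}^0$,
indistinguishability obfuscation exists for all polynomial-size circuits.
```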

Previous work on this topic has established that, to achieve iO, it is sufficient to assume LWE, SXDH, PRG in NC0 — a very simple model of computation in which every output bit depends on a constant number of input bits — and one other object. That object, in this case, is a structured-seed PRG (sPRG) with polynomial stretch and special efficiency properties, the seed of which consists of both a public and a private part. The sPRG is designed to maintain its pseudo-randomness even when an adversary can see the public seed as well as the output of the sPRG. One of the key contributions from the team’s paper is a new and simple way to leverage LPN over fields and PRG in NC0 to build an sPRG for this purpose.

Co-authors Aayush Jain (left) and Amit Sahai

“I am excited that iO can now be based on well-founded assumptions,” said Lin. “This work was the result of an amazing collaboration with Aayush Jain and Amit Sahai, spanning over more than two years of effort.

“The next step is further pushing the envelope and constructing iO from weaker assumptions,” she explained. “At the same time, we shall try to improve the efficiency of the solutions, which at the moment is very far from being practical.”

This is not the first time Lin has contributed to significant advancements in iO in a quest to bring one of the most advanced cryptographic objects into the mainstream. In previous work, she established a connection between iO and PRG to prove that constant-degree, rather than high-degree, multilinear maps are sufficient for obfuscating programs. She subsequently refined that work with Allen School colleague Stefano Tessaro to reduce the degree of multilinear maps required to construct iO from more than 30 to just three. 

More recently, Lin worked with Jain, Sahai, and UC Santa Barbara professor Prabhanjan Ananth and then-postdoc Christian Matt on a new method for constructing iO without multilinear maps through the use of certain pseudo-random generators with special properties, formalized as Pseudo Flawed-smudging Generators (PFG) or perturbation resilient generators (ΔRG). In separate papers, Lin and her co-authors introduced partially hiding functional encryption for constant-degree polynomials or even branching programs, based only on the SXDH assumption over bilinear groups. Though these works still relied on new assumptions in order to achieve iO, they offered useful tools and ideas that paved the way to the recent new construction.

Lin, Jain and Sahai aim to build on their latest breakthrough to make the solution more efficient so that it works not just on paper but also in real-world applications.

“These are ambitious goals that will need the joint effort from the entire cryptography community. I look forward to working on these questions and being part of the effort,” Lin concluded.

Read the research paper here, and the press release here. Read a related Quanta magazine article here.

October 12, 2020

Garbage in, garbage out: Allen School and AI2 researchers examine how toxic online content can lead natural language models astray

Metal garbage can in front of brick wall
Photo credit: Pete Willis on Unsplash

In the spring of 2016, social media users turned a friendly online chatbot named Tay — a seemingly innocuous experiment by Microsoft in which the company invited the public to engage with its work in conversational learning — into a racist, misogynistic potty mouth that the company was compelled to take offline the very same day that it launched. Two years later, Google released its Smart Compose tool for Gmail, a feature designed to make drafting emails more efficient by suggesting how to complete partially typed sentences. That feature had an unfortunate tendency to suggest a bias towards men — leading the company to eschew the use of gendered pronouns altogether.

These and other examples serve as a stark illustration of that old computing adage “garbage in, garbage out,” acknowledging that a program’s outputs can only be as good as its inputs. Now, thanks to a team of researchers at the Allen School and Allen Institute for Artificial Intelligence (AI2), there is a methodology for examining just how trashy some of those inputs might be when it comes to pretrained neural language models — and how this causes the models themselves to degenerate into purveyors of toxic content. 

The problem, as Allen School Master’s student Samuel Gehman (B.S., ‘19) explains, is that not all web text is created equal.

“The massive trove of text on the web is an efficient way to train a model to produce coherent, human-like text of its own. But as anyone who has spent time on Reddit or in the comments section of a news article can tell you, plenty of web content is inaccurate or downright offensive,” noted Gehman. “Unfortunately, this means that in addition to higher quality, more factually reliable data drawn from news sites and similar sources, these models also take their cues from low-quality or controversial sources. And that can lead them to churn out low-quality, controversial content.”

The team analyzed how many tries it would take for popular language models to produce toxic content and found that most have at least one problematic generation in 100 tries.

Gehman and the team set out to measure how easily popular neural language models such as GPT-1, GPT-2, and CTRL would begin to generate problematic outputs. The researchers evaluated the models using a testbed they created called RealToxicityPrompts, which contains 100,000 naturally occurring English-language prompts, i.e., sentence prefixes, that models have to finish. What they discovered was that all three were prone to toxic degeneration, even with seemingly innocuous prompts: the models began generating toxic content within 100 generations, and their expected maximum toxicity climbed higher still within 1,000 generations.
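The kind of measurement described here can be sketched in a few lines. The sketch below assumes each prompt has a list of toxicity scores in [0, 1] (one score per generation) produced by some classifier, and counts scores above 0.5 as toxic; the function name, threshold, and sample numbers are illustrative assumptions, not code from the paper:

```python
def empirical_toxicity_stats(score_batches, threshold=0.5):
    """Given one list of toxicity scores per prompt (one score per generation),
    return (a) the fraction of prompts with at least one generation scoring
    above the threshold, and (b) the mean of the per-prompt maximum scores
    (an empirical "expected maximum toxicity")."""
    toxic_fraction = sum(
        any(s > threshold for s in scores) for scores in score_batches
    ) / len(score_batches)
    expected_max = sum(max(scores) for scores in score_batches) / len(score_batches)
    return toxic_fraction, expected_max

# Hypothetical scores for 3 prompts x 4 generations each.
batches = [
    [0.1, 0.2, 0.7, 0.3],    # one toxic generation
    [0.05, 0.1, 0.2, 0.15],  # none toxic
    [0.6, 0.9, 0.4, 0.2],    # two toxic generations
]
frac, exp_max = empirical_toxicity_stats(batches)
```

With more generations per prompt, the maximum over each batch can only grow, which is why toxicity tends to surface as sampling continues.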

The team — which includes lead author Gehman, Ph.D. students Suchin Gururangan and Maarten Sap, and Allen School professors and AI2 researchers Yejin Choi and Noah Smith — published its findings in a paper appearing in the Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP 2020).

“We found that if just 4% of your training data is what we would call ‘highly toxic,’ that’s enough to make these models produce toxic content, and to do so rather quickly,” explained Gururangan. “Our research also indicates that existing techniques that could prevent such behavior are not effective enough to safely release these models into the wild.”

Those techniques, in fact, can backfire in unexpected ways, which brings us back around to Tay — or rather, Tay’s younger “sibling,” Zo. When Microsoft attempted to rectify the elder chatbot’s propensity for going on racist rants, it scrubbed Zo clean of any hint of political incorrectness. The result was a chatbot that refused to discuss any topic suggestive of religion or politics — even when a reporter simply mentioned that they live in Iraq and wear a hijab. When the conversation steered towards such topics, Zo’s responses would become agitated; if pressed, the chatbot might terminate the conversation altogether.

As an alternative to making certain words or topics automatically off-limits — a straightforward solution but one that lacked nuance, as evidenced by Zo’s refusal to discuss subjects that her filters deemed controversial whether they were or not — Gururangan and his collaborators explored how the use of steering methods such as the fine-tuning of a model with the help of non-toxic data might alleviate the problem. They found that domain-adaptive pre-training (DAPT), vocabulary shifting, and PPLM decoding showed the most promise for reducing toxicity. But it turns out that even the most effective steering methods have their drawbacks: in addition to being computationally and data intensive, they could only reduce, not prevent, neural toxic degeneration of a tested model.
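The blunt-instrument blocklist approach described above amounts to masking banned tokens at decoding time so they can never be sampled. A minimal sketch in plain Python — the token ids, logit values, and `mask_banned` helper are hypothetical illustrations, not code from the paper, and real systems operate on framework tensors rather than lists:

```python
import math

def mask_banned(logits, banned_ids):
    """Return a copy of the logits with banned token ids forced to -inf,
    so the sampler can never pick them. This is the whole trick behind
    the 'off-limits words' approach: it blocks surface forms, not meaning,
    which is why it lacks nuance."""
    return [-math.inf if i in banned_ids else x for i, x in enumerate(logits)]

# Hypothetical next-token logits for a 4-token vocabulary.
logits = [2.0, 0.5, 1.5, 0.1]
masked = mask_banned(logits, banned_ids={0, 2})
best = max(range(len(masked)), key=masked.__getitem__)  # highest remaining token
```

Because the mask applies regardless of context, a benign use of a blocked word is suppressed just as surely as a toxic one — the behavior Zo exhibited.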

The Allen School and AI2 team behind RealToxicityPrompts, top row from left: Samuel Gehman, Suchin Gururangan, and Maarten Sap; bottom row from left: Yejin Choi and Noah Smith

Having evaluated more conventional approaches and found them lacking, the team is encouraging an entirely new paradigm when it comes to pretraining modern NLP systems. The new framework calls for greater care in the selection of data sources and more transparency around said sources, including public release of original text, source URLs, and other information that would enable a more thorough analysis of these datasets. It also encourages researchers to incorporate value-sensitive or participatory design principles when crafting their models.

“While fine-tuning is preferable to the blunt-instrument approach of simply banning certain words, even the best steering methods can still go awry,” explained Sap. “No method is foolproof, and attempts to clean up a model can have the unintended consequence of shutting down legitimate discourse or failing to consider language within relevant cultural contexts. We think the way forward is to ensure that these models are more transparent and human-centered, and also reflect what we refer to as algorithmic cultural competency.”

Learn more by visiting the RealToxicityPrompts project page here, and read the research paper here. Check out the AI2 blog post here, and a related Fortune article here.

September 29, 2020

Allen School’s Joseph Jaeger and Cornell Tech’s Nirvan Tyagi honored at CRYPTO 2020 for advancing new framework for analyzing multi-user security

Joseph Jaeger (left) and Nirvan Tyagi

Allen School postdoctoral researcher Joseph Jaeger and visiting researcher Nirvan Tyagi, a Ph.D. student at Cornell Tech, received the Best Paper by Early Career Researchers Award at the 40th Annual International Cryptology Conference (Crypto 2020) organized by the International Association for Cryptologic Research (IACR). Jaeger and Tyagi, who have been working with professor Stefano Tessaro of the Allen School’s Theory and Cryptography groups, earned the award for presenting a new approach to proving multi-user security in “Handling Adaptive Compromise for Practical Encryption Schemes.” 

Jaeger and Tyagi set out to explore a classic problem in cryptography: How can the security of multi-party communication be assured in cases where an adversary is able to adaptively compromise the security of particular parties? In their winning paper, the authors aim to answer this question by presenting a new, extensible framework enabling formal analyses of multi-user security of encryption schemes and pseudorandom functions in cases where adversaries are able to adaptively compromise user keys. To incorporate an adversary’s ability to perform adaptive compromise, they expanded upon existing simulation-based, property-based security definitions to yield new definitions for simulation-based security under adaptive corruption in chosen plaintext attack (SIM-AC-CPA) and chosen ciphertext attack (SIM-AC-CCA) scenarios. Jaeger and Tyagi also introduced a new security notion for pseudorandom functions (SIM-AC-PRF), to simulate adaptive compromise for one of the basic building blocks of symmetric encryption schemes. This enabled the duo to pursue a modular approach that reduces the complexity of the ideal model analysis by breaking it into multiple steps and splitting it from the analysis of the high-level protocol — breaking from tradition in the process.

“Traditional approaches to formal security analysis are not sufficient to prove confidentiality in the face of adaptive compromise, and prior attempts to address this gap have been shown to be impractical and error-prone,” explained Jaeger. “By employing idealized primitives combined with a modular approach, we avoid the pitfalls associated with those methods. Our framework and definitions can be used to prove adaptive security in a variety of well-studied models, and they are easily applied to a variety of practical encryption schemes employed in real-world settings.”

One of the schemes for which they generated a positive proof was BurnBox, a system that enables users to temporarily revoke access from their devices to files stored in the cloud to preserve their privacy during compelled-access searches — for example, when an agent at a border crossing compels a traveler to unlock a laptop or smartphone to view its contents. In another analysis, the authors applied their framework to prove the security of a commonly used searchable symmetric encryption scheme for preserving the confidentiality of data and associated searches stored in the cloud. In both of the aforementioned examples, Jaeger and Tyagi showed that their approach produced simpler proofs while avoiding bugs contained in previous analyses. They also discussed how their framework could be extended beyond randomized symmetric encryption schemes currently in use to more modern nonce-based encryption — suggesting that their techniques will remain relevant and practical as the use of newer security schemes becomes more widespread.

“Joseph and Nirvan’s work fills an important void in the cryptographic literature and, surprisingly, identifies important aspects in assessing the security of real-world cryptographic systems that have been overlooked,” said Tessaro. “It also defines new security metrics according to which cryptographic systems ought to be assessed, and I can already envision several avenues of future research.”

Read the full research paper here.

Congratulations to Joseph and Nirvan!

August 31, 2020

New NSF AI Institute for Foundations of Machine Learning aims to address major research challenges in artificial intelligence and broaden participation in the field

National Science Foundation logo

The University of Washington is among the recipients of a five-year, $100 million investment announced today by the National Science Foundation (NSF) aimed at driving major advances in artificial intelligence research and education. The NSF AI Institute for Foundations of Machine Learning (IFML) — one of five new NSF AI Institutes around the country — will tap into the expertise of faculty in the Allen School’s Machine Learning group and the UW Department of Statistics in collaboration with the University of Texas at Austin, Wichita State University, Microsoft Research, and multiple industry and government partners. The new institute, which will be led by UT Austin, will address a set of fundamental problems in machine learning research to overcome current limitations of the field for the benefit of science and society.

“This institute tackles the foundational challenges that need to be solved to keep AI on its current trajectory and maximize its impact on science and technology,” said Allen School professor and lead co-principal investigator Sewoong Oh in a UW News release. “We plan to develop a toolkit of advanced algorithms for deep learning, create new methods for coping with the dynamic and noisy nature of training datasets, learn how to exploit structure in real-world data, and target more complex and real-world objectives. These four goals will help solve research challenges in multiple areas, including medical imaging and robot navigation.”

Oh is part of a group led by UW colleague Sham Kakade that will collaborate on the development of a toolkit of fast and efficient algorithms for training neural networks with provable guarantees. The group also aims to eliminate human bottlenecks associated with training machine learning models by constructing a new theoretical and algorithmic framework for neural architecture optimization (NAO). The latter has received minimal attention from researchers despite a broad range of potential applications, including the deployment of energy-efficient networks for edge computing and the Internet of Things, more transparent, interpretable models to replace so-called black-box predictions, and automated, user-friendly systems that enable developers to apply deep learning to real-world problems.

Sham Kakade (left) and Sewoong Oh

“The lack of science around NAO is a structural deficit within machine learning that makes us reliant on human intervention for hyper-parameter tuning, which is neither scalable nor efficient,” explained Kakade, who holds a joint appointment in the Allen School and the Department of Statistics. “Using techniques from mathematical optimization and optimal transport, we will automate the process to speed up the training pipeline while significantly reducing its carbon footprint to meet the growing need for academic and commercial applications. Our work will also provide a rigorous theoretical foundation for driving future advances in the field.”

In addition to making progress on NAO and other core machine learning problems, IFML researchers are keen to demonstrate how the results of their work can have real-world impact. To that end, they will apply the new tools and techniques they have developed to multiple use cases where machine learning holds the potential to advance the state of the art, including video compression and recognition, imaging tools for medical applications and circuit design, and robot navigation. The latter effort, which will be spearheaded by Allen School professor Byron Boots, seeks to overcome current limitations on the ability of robots to operate in unstructured environments under dynamic conditions while simultaneously reducing the training burden.

“Room layouts vary, objects can be moved, and humans are generally unpredictable. These conditions pose a challenge to the safe and reliable operation of robots alongside the many users, co-workers, and random passers-by who may share the same space,” noted Boots. “We need to broaden our concept of what constitutes a robot perception task, from one of pure recognition to one where the robot is capable of viewing the environment in the context of goals shaped by interaction and intention. I’m looking forward to working with this team to translate our foundational research into practical solutions for supporting this new paradigm.”

Byron Boots (left) and Jamie Morgenstern

On the human side, a major goal of the IFML is the broadening of participation in AI education and careers to meet expanding workforce needs and to ensure that the field reflects the diversity of society. Institute members will focus their education and workforce development efforts along the entire pipeline, from K-12 to graduate education. Their plans include development of course content for high school students who currently lack access to AI curriculum, the launch of a new initiative aimed at engaging more undergraduate students in AI research, and the build-out of a multi-state, online Master’s program that will leverage faculty from all three member institutions. Allen School professor Jamie Morgenstern, whose research focuses on the social impacts of machine learning, will lead the charge to implement Project 40×24, which aims to increase the number of women participating in AI to represent at least 40% of the field by the year 2024.

“Given the skyrocketing demand for expertise in AI across academia and industry, it should be a national priority to give students and working professionals access to high-quality educational opportunities in this field,” Morgenstern said. “We need to prepare more people from diverse backgrounds to actively participate in shaping the technologies that will have a growing impact on everyone’s lives. And we have a responsibility to ensure that new knowledge and economic opportunities generated by innovations in machine learning are broadly accessible to all.”

Zaid Harchaoui

Zaid Harchaoui, a professor in the Department of Statistics and an adjunct faculty member in the Allen School, rounds out the UW team.

The IFML is one of two NSF AI Institutes announced today with UW involvement. The other is the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography led by the University of Oklahoma in collaboration with UW’s Evans School of Public Policy & Governance and other academic and industry partners. 

Each of the five inaugural NSF AI Institutes will receive $20 million over five years. NSF has cast today’s announcement as the start of a longer term commitment, as the agency anticipates making additional institute announcements in the future. The initiative, which represents the United States’ most significant federal investment in AI research and education to date, is a partnership between NSF and the U.S. Department of Agriculture, U.S. Department of Homeland Security, and U.S. Department of Transportation.

Read the NSF announcement here, the UW News release here, and UT Austin’s IFML press release here. Learn more about the NSF AI Institutes here.

August 26, 2020

The “conscience of computing”: Allen School’s Richard Ladner receives Public Service Award from the National Science Board

Richard Ladner portrait with books and framed photos behind him

Allen School professor emeritus Richard Ladner, a leading researcher in accessible technology and a leading voice for expanding access to computer science for students with disabilities, has been named the 2020 recipient of the Public Service Award for an individual from the National Science Board (NSB). Each year, the NSB recognizes groups and individuals who have made significant contributions to the public’s understanding of science and engineering. In recognizing Ladner, the board cited his exemplary science communication, diversity advocacy, and well-earned reputation as the “conscience of computing.”

A mathematician by training, Ladner joined the University of Washington faculty in 1971. For much of his career, he focused on fundamental problems underpinning the field of computer science as one of the founders of what is now the Allen School’s Theory of Computation research group. After making a series of significant contributions in computational complexity and optimization — and later, branching out into algorithms and distributed computing — his career would take an unexpected but not altogether surprising turn toward accessibility advocacy and research.

Ladner enrolled in an American Sign Language course at a local community college, a move that represented a “return to his roots” after growing up in a household where both parents were deaf. That experience spurred him to begin volunteering in the community with people who were deaf and blind and to occasionally write about accessibility issues.

Then, in 2002, Ladner began working with Ph.D. student Sangyun Hahn at the UW. Hahn, who is blind, related to Ladner how he was having trouble accessing the full content of his textbooks; mathematical formulas had to be read aloud to him or converted into Braille, while graphs and diagrams had to be manually traced, labeled in Braille, and printed on an embosser. His student’s frustration was the impetus for Ladner and Hahn to launch the Tactile Graphics project, which automated the conversion of textbook figures into an accessible format. Ladner followed that up with MobileASL, a collaboration with Electrical & Computer Engineering professor Eve Riskin to enable people who are deaf to communicate in American Sign Language using mobile phones. Ladner also mentored many Ph.D. students in accessibility research — among them Anna Cavender (Ph.D., ‘10), who developed technology to consolidate a teacher, display screen, sign language interpreter, and captioning on a single screen; Jeffrey Bigham (Ph.D., ‘09), who developed a web-based screen reader that can be used on any computer without the need to download any software; Information School alumnus Shaun Kane, who developed technology to make touchscreen devices accessible to people who are blind; and Shiri Azenkot (Ph.D., ‘14), who developed a Braille-based text entry system for touchscreen devices. 

Ladner’s approach to accessibility research is driven by the recognition that to build technology that is truly useful, you have to work with the people who will use it. It’s a lesson he took from his earlier experience as a volunteer, and one that he has emphasized with every student who has worked with him since. During his career, Ladner has mentored 30 Ph.D. students and more than 100 undergraduate and Master’s students — many of whom followed his example by focusing their careers on accessible technology research. 

“I visited Richard’s lab at the University of Washington just over 10 years ago. While I did get to see Richard, he was most interested in my meeting his Ph.D. students — and I could see why,” recalled Vicki Hanson, CEO of the Association for Computing Machinery. “Richard had provided an atmosphere in which his talented students could thrive. They were extremely bright, enthusiastic, and all involved in accessibility research. I spent the day talking with his students and learning about their innovative work.

“All were committed to developing technology that would overcome barriers for people with disabilities. Sometimes there are barriers in being able to use technology — in other cases, however, the use of technology actually provides opportunities to remove barriers in various aspects of daily living,” Hanson continued. “Richard’s students were working on both of these aspects of accessibility. The collegial and inspiring interactions among his students would serve as a model of research collaboration for computing labs everywhere.”

Ladner’s impact on students extends far beyond the members of his own lab. In addition to his research contributions and mentorship, Ladner has been a prominent advocate for providing pathways into computer science for students with disabilities. To that end, he has been a driving force behind multiple initiatives designed to engage a population that, until recently, was often overlooked in technology circles.

“When we think about diversity, we must include disability as part of that,” Ladner noted. “The conversation about diversity should always include disability.”

To that end, Ladner has been a leading voice for the inclusion of people with disabilities in conversations around improving diversity in technology. He served as a founding member of the board of the national Center for Minorities and People with Disabilities in Information Technology (CMD-IT). The organization hosts the ACM Richard Tapia Celebration of Diversity in Computing, which attracts an estimated 1,500 attendees of diverse backgrounds and abilities each year. Ladner was also a member of the steering committee that established the Computing Research Association’s Grad Cohort Workshop for Underrepresented Minorities and Persons with Disabilities (URMD) for beginning graduate students. In discussions leading up to the program’s launch, Ladner was instrumental in making sure that the “D” made it into the name and scope of the workshop.

Ladner has also worked directly with colleagues and students around the country to advance diversity in the field. The longest-running of these initiatives is the Alliance for Access to Computing Careers (AccessComputing), which he co-founded with Sheryl Burgstahler, Director of the UW’s DO-IT Center, with funding from the National Science Foundation’s Broadening Participation in Computing program. AccessComputing and its 60 partner institutions and organizations support students with disabilities to successfully pursue higher education and connect with career opportunities in computing fields. Since its inception in 2006, that initiative has served nearly 1,000 high school and college students across the country. For seven consecutive years, Ladner also organized the annual Summer Academy for Advancing Deaf and Hard of Hearing in Computing to prepare students to succeed in computing majors and careers.

More recently, Ladner partnered with Andreas Stefik, a professor at the University of Nevada, Las Vegas, on AccessCSForAll. That initiative is focused on developing accessible K-12 curricula for computer science education along with professional development for teachers. The duo also worked to review and modify the Computer Science Principles Advanced Placement course to ensure that online and offline course activities met accessibility standards for students with disabilities. This included developing accessible alternatives to visually-based unplugged activities as well as making interactive tools that would work with screen readers. Ladner and his collaborators on the project earned a Best Paper Award at last year’s conference of the ACM’s Special Interest Group on Computer Science Education (SIGCSE 2019) for their efforts.

This past spring, Ladner was one of nine researchers to co-found the new Center for Research and Education on Accessible Technology and Experiences (CREATE) at the UW. The mission of CREATE is to make technology accessible and to make the world accessible through technology. The center, which was established with an inaugural $2.5 million investment from Microsoft, consolidates the efforts of faculty from the Allen School, Information School, and departments of Human Centered Design & Engineering, Mechanical Engineering, and Rehabilitation Medicine who work on various aspects of accessibility. 

“Richard is a gifted scientist and mentor who really helped to put UW on the map when it comes to accessible technology,” said professor Magdalena Balazinska, Director of the Allen School. “As a staunch advocate for innovation that serves all users, his impact on computing education and research cannot be overstated.”

Since his retirement in 2017, Ladner has remained engaged with the Allen School community and continues to invest his time and energy in accessible technology research and increasing opportunities for students with disabilities in computing fields. In accepting this latest accolade — one in a long line of many prestigious awards he has collected during his career — Ladner expressed optimism that accessibility’s importance is recognized by an increasing number of his peers.

“I am honored to receive this recognition from the National Science Board and heartened that the scientific community is rising to the important challenge of supporting students with disabilities,” Ladner said.

Read the NSB press release here, and learn more about Ladner’s career and contributions in a previous Allen School tribute here.

August 11, 2020

Vikram Iyer receives Marconi Society Young Scholar Award after creating a buzz with bio-inspired wireless technologies

No one could accuse Ph.D. student Vikram Iyer of just winging it. Since his arrival at the University of Washington, Iyer has advanced ground-breaking innovations in low-power wireless communication and computation to expand the Internet of Things, from 3D-printable wireless objects capable of storing and transmitting data, to insect-scale platforms that provide a bug’s eye view of the world. As a sign of just how his ideas have taken flight, today Iyer was named one of three recipients of the Paul Baran Young Scholar Award by the Marconi Society.

“By creating low cost, mobile IoT devices that can help answer questions and solve problems in any environment, Vikram’s work supports the Marconi Society’s mission of bringing the opportunity of the network to everyone,” said internet pioneer Vint Cerf, Chair of the Marconi Society, in a press release. “We are proud to welcome him to the Marconi family.”

The award recognizes innovative young engineers who show extraordinary technical acumen, creativity and promise for creating tomorrow’s information and communications technologies to support a digitally inclusive society. The honor came as no surprise to Iyer’s advisor, Allen School professor Shyam Gollakota, who recognized early on that his student would be a highflier.

“Vikram is a one-of-a-kind creative interdisciplinary researcher who is also humble,” said Gollakota. “He develops creative solutions that are at the intersection of hardware, software and biology. In so doing, he transforms what was once science fiction into reality.”

Iyer, a student in the UW Department of Electrical & Computer Engineering, began working with Gollakota in the Networks & Mobile Systems Lab in 2015. Among their first projects together was a collaboration with Allen School and ECE professor Joshua Smith of the Sensor Systems Laboratory on an ultra-low power system to provide wireless connectivity for implantable devices. Interscatter — short for intertechnology backscatter — employs a technique called backscatter communication to convert Bluetooth transmissions to WiFi and ZigBee signals over the air using commodity devices. As part of that project, Iyer and his collaborators created the first prototype contact lens antenna and an implantable neural recording interface capable of communicating directly with smartphones and smart watches. The team earned a Best Paper Award from the Association for Computing Machinery’s Special Interest Group on Data Communications (ACM SIGCOMM).

More recently, Iyer, Gollakota and their colleagues teamed up with professor Sawyer Fuller of the UW Mechanical Engineering Department’s Autonomous Insect Robotics (AIR) Lab to enable new wireless robotic technologies to take flight. The result was RoboFly, the world’s first wireless fly-sized drone to achieve liftoff. Unlike previous insect-scale drones, RoboFly does not require a wire to the ground to supply power and control signals — a significant achievement on the path toward autonomous robot flight. The team’s bio-inspired design featured dual flapping wings driven by a pair of piezoelectric actuators and directed by a lightweight microcontroller, which issues a series of pulses mimicking the action of a biological fly’s wings. An onboard photovoltaic cell converts light from a laser beam into electricity to power the onboard components without the need for heavy batteries, while the first sub-100 milligram boost converter and piezo driver raises the voltage high enough to enable RoboFly’s ascent.

While news of RoboFly’s exploits took off, Iyer recognized that there are limits to what a robotic insect can do. For one thing, liftoff at that scale was difficult to achieve, and commercial drones are limited in how long they can fly uninterrupted.

“This made me wonder, rather than building a system that mimics an insect, could we augment live insects with sensing, computing and communication functionalities to create a mobile IoT platform?” Iyer explained. “We could use this platform to study micro-climates on large farms, answer questions about insects’ behavior or collect air quality data at a more granular level than by using a handful of stationary sensors.”

Iyer with his advisor, Shyam Gollakota, unveiling 3D-printed wireless smart objects in 2017

To explore the idea, Iyer set up an amateur beekeeping operation in a room in the Paul G. Allen Center on campus. The result was Living IoT, a mobile platform that combines sensing, computation, and communication packaged into a tiny wireless backpack light enough to be carried by a bumblebee. The entire system — antenna, envelope detector, sensor, microcontroller, backscatter transmitter, and rechargeable battery — weighed in at just 102 milligrams, or around half a bumblebee’s potential payload. Because the system did not need to power flight, only data collection, the team could keep the weight down by designing the system to transmit data and recharge the battery when the bee returned to the hive each day.

For his latest project, which was recently published in Science Robotics, Iyer’s insect subjects kept their feet on the ground. Building on the previous work with bees, Iyer and his collaborators created a new wireless backpack containing a tiny, steerable video camera operated via Bluetooth. This time, they fitted their system on two species of beetle to demonstrate the potential for insect-scale robotic vision. Dubbed “BeetleCam,” the system emulates a real bug’s energy-efficient approach to gathering visual information, which relies on head motion independent of its body, while a built-in accelerometer prolongs battery life by allowing the system to capture images only when the beetle is in motion. Weighing in at a mere 250 milligrams, or roughly half the payload the insects can carry, the system enables the beetles to freely navigate terrain and climb trees.

The team used what it learned to design the world’s smallest power-autonomous terrestrial robot with vision — proving, once again, that good things really do come in small packages.

“This is the first time that we’ve had a first-person view from the back of a beetle while it’s walking around. There are so many questions you could explore, such as how does the beetle respond to different stimuli that it sees in the environment?” Iyer said in a UW press release. “But also, insects can traverse rocky environments, which is really challenging for robots to do at this scale. So this system can also help us out by letting us see or collect samples from hard-to-navigate spaces.”

Iyer and a bumblebee demonstrate Living IoT

Iyer is the second UW student — and second from Gollakota’s lab — to earn this prestigious award. His labmate and frequent collaborator, Rajalakshmi Nandakumar (Ph.D. ‘19), now a faculty member at Cornell Tech, was honored in 2018 for her work on mobile apps for detecting life-threatening health issues. In addition to Iyer, the Marconi Society recognized two other researchers with 2020 Young Scholar Awards: Yasaman Ghasempour at Rice University (soon joining the Princeton University faculty) for her work on efficient, ultra-high speed network connections for next-generation IoT, and Piotr Roztocki at Canada’s Institut National de la Recherche Scientifique (INRS) for his work on scalable quantum resources for “future-proofing” telecommunications network security. The honorees were selected by an international panel of engineers drawn from leading universities and companies.

“Our Young Scholars are the braintrust that will put the speed, security and applications of next generation networks into the hands of billions,” said Cerf.

View Iyer’s Marconi Society profile here, and learn more about the 2020 Young Scholar Awards here. Watch a conversation between Iyer and Marconi Fellow Brad Parkinson here, and check out Iyer’s Geek of the Week profile on GeekWire here.

Congratulations, Vikram!

August 4, 2020
