Allen School celebrates opening of NVIDIA’s new robotics research lab in Seattle

Allen School director Hank Levy welcomes NVIDIA to Seattle. Credit: NVIDIA

In yet another sign of the Puget Sound region’s emergence as a center of advanced robotics and artificial intelligence research, NVIDIA last week marked the official opening of its new AI Robotics Research Lab just blocks away from the Allen School and University of Washington’s Seattle campus. Led by Allen School professor Dieter Fox, NVIDIA’s new lab in the UW CoMotion building will bring together multidisciplinary teams to focus on the development of next-generation robots that can work safely and effectively alongside humans.

Many UW faculty and students were on hand to toast the new lab, including Allen School director Hank Levy. “I’d just like to say how excited we are to have NVIDIA here in Seattle,” Levy said in remarks welcoming the assembled guests to the lab for the first time. “At this moment, AI is changing the world, and NVIDIA’s hardware is a driving force in that movement.”

NVIDIA CEO Jensen Huang and a guest get hands-on with a touch-sensitive robot on display in the Seattle lab.

The new lab represents a couple of firsts for NVIDIA — not only is it the company’s first research outpost in Seattle, but it is also the very first NVIDIA research lab focused on robotics. Among the highlights of the 13,000 square-foot space is a working kitchen, complete with drawers, cabinets and appliances, in which the team hopes to whip up new capabilities in human-robot collaboration. “We want to ultimately get a robot that can cook a meal with you,” Fox said, “or that you can just talk to it and tell the robot what you want to do.”

To get there, Fox and his colleagues will need to make progress in a variety of areas, spanning artificial intelligence, robotic manipulation, machine learning, computer vision, natural language processing, and more. Noting that the new lab intends to publish its research so that it can be built upon by others — “we aren’t keeping it to ourselves” — Fox emphasized that its work will be a collaborative endeavor through which researchers from NVIDIA, UW, and other leading universities will push the field of robotics forward.

Allen School professors Ed Lazowska (left) and Dieter Fox. Under Fox’s leadership, the new lab intends to collaborate with researchers at UW and other leading universities and publish the results. Credit: NVIDIA

That sentiment was echoed by NVIDIA CEO Jensen Huang, who emphasized that the culture of collaboration that underpins UW research was also “the perfect culture for creating a robotics platform” in a city that he regards as “one of the greatest hubs of computer science in the world,” thanks to the presence of UW, Microsoft, Amazon, and many others.

“Robotics is going to change the world,” Levy said, “and having an NVIDIA lab this close to UW, started by Allen School professor Dieter Fox, gives us an incredible opportunity to work together to advance the state of the art. We are looking forward to long-term and successful collaborations between this lab, our faculty and students, and other members of the Seattle tech community.”

To learn more, read NVIDIA’s blog post and check out coverage by The Daily, GeekWire, MIT Tech Review, IEEE Spectrum, Robotics & Automation News, The Robot Report, Engadget, Hot Hardware, eTeknix, Neowin, SD Times, and the Puget Sound Business Journal (subscription required).

Read more →

Mobile app developed by UW researchers offers people a “Second Chance” in the event of an opioid overdose

A person interacts with the Second Chance mobile app, activating the "monitor" function before using opioids
Credit: Mark Stone/University of Washington

Someone in the United States dies from an opioid overdose every 12 and a half minutes, according to data from the National Institute on Drug Abuse, and the rise in fatalities stemming from illicit opioid use is widely recognized as a public health epidemic. Many of these deaths could be prevented by rapid detection and intervention, including the administration of naloxone to reverse the effects of an overdose. Now, thanks to researchers in the Allen School’s Networks & Mobile Systems Lab and UW Medicine’s Department of Anesthesiology & Pain Medicine, a solution for preventing opioid-related deaths may be at hand. In a paper published today in Science Translational Medicine, the team describes a new contactless smartphone app capable of detecting signs of opioid overdose. Called Second Chance, the app converts the phone’s speaker and microphone into an active sonar system to unobtrusively monitor a person’s breathing and movements from distances of up to three feet, looking for patterns that indicate that they may be in danger.

According to one of those researchers, Allen School professor Shyam Gollakota, the ultimate goal in developing the app is not only to monitor a person’s condition, but eventually be able to connect users immediately with potentially life-saving treatment. “The idea is that people can use the app during opioid use so that if they overdose, the phone can potentially connect them to a friend or emergency services to provide naloxone,” Gollakota explained in a UW News release.

But first, the team had to develop a reliable algorithm that would work in real-world settings. Gollakota and his collaborators — Allen School Ph.D. student Rajalakshmi Nandakumar and Dr. Jacob Sunshine, a physician scientist at UW Medicine — looked northward to Insite, the first legal supervised injection site in North America. Located in Vancouver, British Columbia, Insite hosts approximately 500 supervised injections per day. There, Nandakumar and her colleagues were able to gather data on individuals’ breathing patterns before and after opioid injection in a safe setting — gaining valuable insights into how Second Chance might function in actual situations where opioids are being used.

“The participants prepared their drugs like they normally would, but then we monitored them for a minute pre-injection so the algorithm could get a baseline value for their breathing rate,” Nandakumar explained. “After we got a baseline, we continued monitoring during the injection and then for five minutes afterward, because that’s the window when overdose symptoms occur.”

Those symptoms might include cessation of breathing for 10 seconds or more — known as post-injection central apnea — and opioid-induced respiratory depression, in which a person’s respiratory rate slows significantly to seven or fewer breaths per minute. In addition to measuring a person’s breathing, the app is capable of detecting changes in a person’s posture, such as a slumping of the head, which could indicate that they are in danger. In cases where someone is alone with no one to witness symptoms such as these, an app like Second Chance could be their only means of getting help. For this reason, the researchers couldn’t leave the effectiveness of their algorithm to chance; they had to be confident that the app would do what it was designed to do in the wild. To that end, they looked for a way to test the app on symptoms consistent with an overdose without putting anyone at risk.
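
As a rough illustration of how the two breathing thresholds described above could translate into monitoring logic, here is a minimal sketch that flags apnea and respiratory depression from a list of breath timestamps. The function name, input format, and alert labels are hypothetical, and the actual app’s sonar-based signal processing is far more sophisticated than these simple checks.

```python
def assess_breathing(breath_times, window_end):
    """Flag overdose-like symptoms from breath timestamps (in seconds).

    Thresholds follow the two symptoms described above: no breaths for
    10 or more seconds (apnea), or 7 or fewer breaths in the last minute
    (respiratory depression).
    """
    alerts = []

    # Apnea: a gap of 10+ seconds between breaths, or since the last breath.
    gaps = [b - a for a, b in zip(breath_times, breath_times[1:])]
    gaps.append(window_end - breath_times[-1])
    if max(gaps) >= 10:
        alerts.append("central apnea")

    # Respiratory depression: 7 or fewer breaths in the most recent minute.
    recent = [t for t in breath_times if t > window_end - 60]
    if len(recent) <= 7:
        alerts.append("respiratory depression")

    return alerts

# Hypothetical sonar output: breaths every ~9 seconds, then a 15-second pause.
print(assess_breathing([0, 9, 18, 27, 36, 45], window_end=60))
```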

Overdose events at facilities like Insite are rare by design. But Sunshine, an anesthesiologist, knew of another type of facility where many of the same symptoms can be safely simulated: the operating room. “When patients undergo anesthesia, they experience much of the same physiology that people experience when they’re having an overdose,” he explained. “Nothing happens when people experience this event in the operating room because they’re receiving oxygen and they are under the care of an anesthesiology team.”

The smartphone interface shows the Second Chance app dialing 9-1-1 for emergency intervention after detecting signs of an overdose
Credit: Mark Stone/University of Washington

The team worked with Sunshine’s colleagues in UW Medicine to test the algorithm in what amounted to a real-world simulation of an overdose, with the help of healthy patients undergoing elective surgery who offered their informed consent. During their regularly scheduled procedures, the participants were administered standard anesthetic medications that slowed or stopped their breathing for 30 seconds while Second Chance monitored them. In 19 out of 20 cases, the app correctly detected the symptoms that correlate with an overdose; in the one case it missed, the breathing rate was just above the threshold of what would be considered a sign of overdose.

Having validated their approach, the researchers aim to commercialize Second Chance through Sound Life Sciences, Inc., a digital therapeutics company spun out of the University of Washington, and seek approval from the U.S. Food and Drug Administration (FDA). While the app is technically capable of measuring symptoms consistent with an overdose of any form of opioid, including prescription opioids taken by mouth, the researchers are quick to point out that so far, they have only tested it in scenarios involving use of illicit injectable opioids — the most common source of death by overdose. As Sunshine points out, it’s a human toll that is completely preventable with timely intervention, which as the name of the app suggests, would offer people the proverbial second chance.

“The goal of this project is to try to connect people who are often experiencing overdoses alone to known therapies that can save their lives,” he said. “We hope that by keeping people safer, they can eventually access long-term treatment.”

Learn more about Second Chance in the Science Translational Medicine paper here, the UW News release here, and a related UW Medicine story here. Check out coverage by Scientific American, MIT Technology Review, Science News, CNBC, Mother Jones, U.S. News & World Report, Axios, Futurism, CNET, Fast Company, Engadget, New Atlas, The Verge, Smithsonian, KOMO News, UPI, Tech Times, TechSpot, the Associated Press, and MD Magazine. Listen to Nandakumar discussing the Second Chance app on NPR’s Science Friday here, and watch a related Reuters video here.

Read more →

Ras Bodik, Alec Wolman, and Aaron Hertzmann recognized as Fellows of the ACM for outstanding contributions to the field of computing

Association for Computing Machinery logo

Three members of the Allen School family were recently named Fellows of the Association for Computing Machinery (ACM) in recognition of their professional achievements. Professor Rastislav (Ras) Bodik of the Allen School’s Programming Languages & Software Engineering (PLSE) group, former postdoc Aaron Hertzmann of Adobe Research, and alumnus Alec Wolman (Ph.D., ‘02) of Microsoft Research were among the 56 ACM members worldwide to be recognized in the 2018 class of Fellows for their outstanding technical contributions in computing and information technology and their service to the computing community.

“In society, when we identify our tech leaders, we often think of men and women in industry who have made technologies pervasive while building major corporations,” said ACM President Cherri M. Pancake in a press release. “At the same time, the dedication, collaborative spirit and creativity of the computing professionals who initially conceived and developed these technologies goes unsung. The ACM Fellows program publicly recognizes the people who made key contributions to the technologies we enjoy.”

Ras Bodik

Ras Bodik portrait

In Ras Bodik’s case, those contributions center on his work in algorithmic program synthesis, a field that he helped start in the mid-2000s. Program synthesis — sometimes referred to as automatic programming — simplifies the process of writing computer programs by asking the computer to search for a program that accomplishes the user’s goals. Over the last 15 years, Bodik and his collaborators demonstrated practical applications of program synthesis by developing efficient algorithms and making them available to programmers by combining language design with new programmer interaction models.

“One of the benefits of program synthesis is that it makes computing more accessible, even as our systems increase in scale and complexity,” said Bodik. “By making it possible to start from an incomplete specification, such as user demonstrations, program synthesis opens up programming to novice users while it eases the process for those of us who write programs for a living.”

Bodik has shown program synthesis to be a versatile technique that can benefit experts and non-experts alike and attack problems with real-world impact. Bodik’s interest in program synthesis started at the University of Wisconsin, where he worked on mining program specifications from software corpuses. When he was a faculty member at the University of California, Berkeley, he and then-Ph.D. student Armando Solar-Lezama, now a member of the faculty at MIT, laid the groundwork for modern program synthesis through so-called program sketches and by reducing the search problem to SAT constraint solving. To help programmers create new synthesizers, he collaborated on Rosette, a solver-aided programming framework that was developed by Allen School professor Emina Torlak. Rosette demonstrated how program synthesis can make up for the absence of compilers in certain domains and avoid error-prone, low-level code by enabling programmers to produce domain-specific tools for verification, synthesis, and debugging.
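
To make the core idea concrete, here is a minimal, hypothetical sketch of synthesis from an incomplete specification: given a handful of input-output examples, the synthesizer searches a tiny grammar of candidate expressions for one that satisfies them all. Systems like Sketch and Rosette encode this search as SAT/SMT constraints rather than brute-force enumeration; the grammar and examples below are invented purely for illustration.

```python
# A toy grammar of candidate programs over two integer inputs, x and y.
CANDIDATES = {
    "x + y":     lambda x, y: x + y,
    "x - y":     lambda x, y: x - y,
    "x * y":     lambda x, y: x * y,
    "max(x, y)": lambda x, y: max(x, y),
    "2 * x + y": lambda x, y: 2 * x + y,
}

def synthesize(examples):
    """Return the first candidate consistent with every input-output example."""
    for name, fn in CANDIDATES.items():
        if all(fn(x, y) == out for (x, y), out in examples):
            return name
    return None

# The "specification" is just a few demonstrations supplied by the user.
print(synthesize([((1, 2), 4), ((3, 1), 7), ((0, 5), 5)]))   # -> "2 * x + y"
```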

An attractive feature of program synthesis is that it can be applied outside the realm of computer science to solve problems in cases where the solution can be modeled as a program — a feature that Bodik has been keen to exploit to generate practical solutions for scientists working in other domains. For example, he and his team have enabled the use of program synthesis by biologists to infer cellular models from mutation experiments, and by data scientists to simplify the layout of data visualizations.

As Bodik and his colleagues have expanded program synthesis into a broadening array of applications, they have taken an interdisciplinary approach to research that has led to advancements in multiple computing domains, including more efficient algorithms to enable synthesis of large SQL queries and the extension of programming by demonstration to the web browser. The Helena project, for example, has enabled many teams of social scientists to collect web datasets, which they use to help city governments design new policies.

Bodik is also credited with spurring the rapid advancement of the synthesis community to where it has achieved parity with human programmers on at least a dozen tasks that typically require months of training. Half of those milestones are based on work produced by Bodik and his collaborators.

“We have reached the point where novice programmers can generate programs that function as well or better than those created by experts,” Bodik observed. “Program synthesis, especially when combined with other technologies for human-computer interaction, can be a great leveler.”

Aaron Hertzmann

Aaron Hertzmann portrait

Aaron Hertzmann, a former postdoc in the Allen School’s Graphics & Imaging Laboratory (GRAIL) from 2001 to 2002 and an affiliate faculty member since 2005, was recognized for his contributions spanning computer graphics, non-photorealistic rendering, computer animation, and machine learning. After leaving the Allen School, Hertzmann spent 10 years as a faculty member in the Computer Science Department at the University of Toronto before joining Adobe Research, where he is a principal scientist focused on computer vision and computer graphics.

Hertzmann is known for his work on new methods for extracting meaning from images and modeling the human visual system, as well as the creation of robust software tools for creating expressive, artistic imagery and animation in the style of human painting and drawing. Early in his career, as a member of New York University’s Media Research Laboratory, Hertzmann contributed new techniques for non-photorealistic rendering that combined the expressivity of those natural media with the flexibility of computer graphics. For example, he developed a method for painterly rendering to create images that appear hand-painted from photographs using multiple brush sizes and long, curved brush strokes. He extended the concept to video with new methods for “painting over” successive frames of animation to produce a novel visual style — work that he later built upon to produce AniPaint, an interactive system for generating painterly animation from video sequences that granted users more direct control over stroke synthesis. Among Hertzmann’s many other contributions to computing for art and design are new algorithms for generating line-art illustrations of smooth surfaces, a new method for computing the visible contours of a smooth 3D surface for stylization, and tools for automatically creating graphic design layouts and generating interactive layout suggestions.

Hertzmann has also worked on a number of projects for building computational models of human motion and for perceiving the 3D structure of people and objects in video that gained traction in the broader graphics and special effects communities. For example, Hertzmann contributed to the development of the style machine, a statistical model that can generate new motion sequences based on learned motion patterns from a series of motion capture sequences — the first system for “animation by example” that has since gained popularity. Other contributions include an inverse kinematics system that produces real-time human poses based on a set of constraints that subsequently was deployed in the gaming industry, and Nonlinear Inverse Optimization, a novel approach for generating realistic character motion based on a dynamical model derived from biomechanics.

More recently, Hertzmann has turned his attention to virtual reality (VR). One of his projects that has gained prominence is Vremiere, a system for enabling direct editing of spherical video in immersive environments. This work formed the basis of Adobe’s Project Clover in-VR video editing interface announced at its MAX Creative Conference in 2016 and earned the team a Best Paper Honorable Mention at last year’s ACM Conference on Human Factors in Computing Systems (CHI 2017). Hertzmann worked with the same group of collaborators to produce CollaVR, a system that enables collaborative review and feedback in immersive environments for multiple users.

“My post-doctoral experience at UW, and the long-term collaborations that arose from it, were some of the richest of my career,” Hertzmann said about his time with GRAIL. “They broadened my experience into several research areas that were new to me.”

Alec Wolman

Alec Wolman portrait

Alec Wolman earned his Ph.D. from the Allen School working with professors Hank Levy and Anna Karlin. He is a principal researcher in Microsoft’s Mobility and Networking Research Group, where he manages a small team of researchers and developers. His research interests span mobile systems, distributed systems, operating systems, internet technologies, security, and wireless networks. In naming him a 2018 Fellow, ACM highlighted his many contributions in the area of trusted mobile systems and services.

One of those contributions was fTPM — short for “firmware Trusted Platform Module” — which was implemented in millions of Windows smartphones and tablets and represented the first implementation of the TPM 2.0 specification. fTPM enables Windows on ARM SoC platforms to offer TPM-based security features including BitLocker, DirectAccess, and Virtual Smart Cards. fTPM leverages ARM TrustZone to implement these secure services on mobile devices and offers security guarantees similar to a discrete TPM chip — one of the most popular forms of trusted hardware in the industry. Wolman was also one of the researchers behind Trusted Language Runtime (TLR), a system that made it easy for smartphone app developers to build and run trusted applications while offering compatibility with legacy software and operating systems. In addition, he contributed to software abstractions for trusted sensors used in mobile applications; the cTPM system to extend trusted computing abstractions across multiple mobile devices; and Sentry, which protects sensitive data on mobile devices from low-cost physical memory attacks.

Wolman’s contributions in distributed systems and cloud services have had a significant impact on Microsoft’s products, serving many millions of users. Recently, he helped design and develop Microsoft Embedded Social (ES), a scalable Azure service that enables application developers to incorporate social engagement features within their apps in a fully customizable manner. ES has been incorporated in nearly a dozen applications and has served roughly 20 million users to date. Previously, Wolman co-developed the partitioning and recovery service (PRS) as a component of the Live Mesh file synchronization product. PRS, which enables data distribution across a set of servers with strong consistency as a reusable component, was later incorporated into the cloud service infrastructure for Windows Messenger and Xbox Live.

Wolman’s work on offloading computations — including Mobile Assistance Using Infrastructure (MAUI) and Kahawai — demonstrated how mobile devices can leverage both cloud and edge computing infrastructure; these projects are considered seminal pieces of research that have influenced thousands of follow-on papers. Wolman and his team also collaborated with Allen School researchers on the development of MCDNN, a framework that enables mobile devices to run deep neural networks without overtaxing resources such as battery life, memory, and data usage.

The Fellows Program is the ACM’s most prestigious member level and represents just one percent of the organization’s global membership. The ACM will formally honor the 2018 Fellows at its annual awards banquet to be held in San Francisco, California in June.

Learn more about the 2018 class of ACM Fellows here.

Congratulations to Ras, Aaron, and Alec!

Read more →

Ph.D. student Ewin Tang recognized in Forbes’ “30 Under 30” in science for taking the “quantum” out of quantum computing

Ewin Tang

Allen School Ph.D. student Ewin Tang has landed a spot on Forbes’ 2019 list of “30 Under 30” in science for developing a method that enables a classical computer to solve the “recommendation problem” in roughly the same time that a quantum computer could — upending one of the most prominent examples of quantum speedup in the process. Her algorithm offers an efficient solution to a core machine learning problem which models the task of predicting user preferences from incomplete data based on people’s interactions with sites such as Amazon and Netflix.

Tang, who arrived at the University of Washington this past fall to work with professor James R. Lee in the Theory of Computation group, tackled the recommendation problem as an undergraduate research project while at the University of Texas at Austin. The project was an outgrowth of a quantum information course she took with UT Austin professor Scott Aaronson, who challenged her to prove that no fast classical algorithm exists for solving the problem. He was inspired to set this particular challenge after Iordanis Kerenidis and Anupam Prakash — researchers at Université Paris Diderot and the University of California, Berkeley, respectively — published a quantum recommendation algorithm in fall 2016 that could solve the problem exponentially faster than any known classical algorithm, in part by relying on a randomized sample of user preferences rather than attempting to reconstruct a full list. But they did not prove definitively that no such classical algorithm existed.

Enter Tang and Aaronson. According to an article about Tang’s discovery that appeared last summer in Quanta Magazine, Aaronson believed that a comparable algorithm didn’t exist, and that his student would confirm Kerenidis’ and Prakash’s discovery. But as Tang worked through the problem during her senior year, she started to believe that, actually, it did exist. After Tang presented her results at a workshop at UC Berkeley that June, other members of the quantum computing community agreed — confirming it as the fastest-known classical algorithm and turning a two-year-old discovery on its head.

Or, as Tang recently explained to GeekWire, “We ended up getting this result in quantum machine learning, and as a nice side effect a classical algorithm popped out.”

The quantum algorithm relies on sampling to make the process of computing high-quality recommendations more efficient, built on the assumption that most users and their preferences can be approximated by their alignment with a small number of stereotypical user preferences. This approach enables the system to bypass reconstruction of the complete preference matrix, in which many millions of users and elements may be represented, in favor of homing in on the highest-value elements that matter most when giving recommendations. Tang employs a similar strategy to achieve comparable results — proving that a classical algorithm can produce recommendations just as well, and nearly as fast, as the quantum algorithm can, without the aid of quantum computers.
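
The classical primitive at the heart of this line of work is length-squared sampling: drawing rows of the preference matrix with probability proportional to their squared norms and rescaling them so that a small sample stands in for the whole matrix. The toy sketch below illustrates that primitive on a synthetic low-rank matrix; it is not Tang’s algorithm, which assumes only sample-and-query access to the data and never materializes the full matrix.

```python
import numpy as np

def sample_rows(A, s, rng):
    """Length-squared row sampling: keep s rows, chosen with probability
    proportional to their squared norms and rescaled so that the sample's
    Gram matrix approximates A.T @ A in expectation."""
    norms = np.sum(A * A, axis=1)
    probs = norms / norms.sum()
    idx = rng.choice(A.shape[0], size=s, p=probs)
    return A[idx] / np.sqrt(s * probs[idx])[:, None]

rng = np.random.default_rng(0)
# Hypothetical preference matrix: 10,000 users, 500 items, approximately rank 5.
A = rng.standard_normal((10_000, 5)) @ rng.standard_normal((5, 500))

S = sample_rows(A, s=200, rng=rng)               # a small sketch of the matrix
_, _, Vt = np.linalg.svd(S, full_matrices=False)
V = Vt[:5].T                                      # approximate top right singular vectors of A

user = A[42]                                      # one user's preference row
scores = (user @ V) @ V.T                         # project onto the low-rank preference space
print(np.argsort(scores)[-5:])                    # indices of the top five recommended items
```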

According to Tang’s Ph.D. advisor, Lee, her achievement extends far beyond the original question she set out to answer.

“Ewin’s work provides more than just a (much) faster algorithm for recommendation systems — it gives a new framework for the design of algorithms in machine learning,” Lee said. “She and her collaborators are pursuing applications in clustering, regression, and principal component analysis, which are some of the most fundamental problems in the area. Her work is also a step toward clarifying the type of structure quantum algorithms must exploit in order to achieve an exponential speedup.”

Check out Tang’s Forbes profile here and a related GeekWire article here. Read the original story of her discovery in Quanta Magazine here.

Way to go, Ewin!

Read more →

University of Washington researchers create a buzz with Living IoT system that replaces drones with bees

Bumblebee wearing the Living IoT backpack while foraging on a flower.

A team of researchers in the Networks & Mobile Systems Lab led by Allen School professor Shyam Gollakota and the Autonomous Insect Robotics (AIR) Laboratory led by Mechanical Engineering professor Sawyer Fuller have designed a new mobile platform that combines sensing, computation, and communication in a package small enough to be carried by a bumblebee. Dubbed Living IoT, the system allows nature to take its course while enabling new capabilities in agricultural and environmental monitoring.

Living IoT’s reliance on biological, rather than mechanical, flight presents new opportunities for continuous sensing without having to repeatedly recharge power-hungry batteries throughout the day. However, it did present some novel challenges for the team, not least of which was how to create a form factor small and light enough to ride on the back of a bee while still powering data collection and communication.

“We decided to use bumblebees because they’re large enough to carry a tiny battery that can power our system, and they return to a hive every night where we could wirelessly recharge the batteries,” explained Vikram Iyer, a Ph.D. student in Electrical & Computer Engineering and co-primary author on the research paper.

The resulting design, which incorporates an antenna, envelope detector, sensor, microcontroller, backscatter transmitter, and rechargeable battery, weighs in at a minuscule 102 milligrams — roughly half of a bumblebee’s potential payload and less than the maximum weight the team determined the insect could carry without interfering with takeoff or controlled flight.

Vikram Iyer at the UW's urban farm with one of the Living IoT bumblebees

“The rechargeable battery powering the backpack weighs about 70 milligrams,” noted Allen School Ph.D. student Rajalakshmi Nandakumar in a UW News release. “So we had a little over 30 milligrams left for everything else, like the sensors and the localization system to track the insect’s position.”

The battery offers seven hours of uninterrupted data collection time before it has to be recharged. As the bees fly around, their onboard sensors collect data such as temperature, humidity, and light intensity and store it for later upload back at the hive. That upload happens wirelessly using backscatter communication, a technique honed by members of the Networks & Mobile Systems Lab to enable a range of IoT applications.

To enable location-based sensing and data tracking in the absence of flight control, the researchers came up with a novel approach for self-localization that relies on passive operations in place of power-hungry radio receiver components. Instead, strategically positioned access point (AP) radios broadcast signals that are received by the bees as they go about their business. Using changes in the signal amplitude extracted from the onboard envelope detector, the team is able to determine the insect’s angle relative to each AP at various points throughout the day and triangulate its position to localize the sensor data. According to Allen School Ph.D. student Anran Wang, the system can detect the bee’s position within 80 meters of the antennas — roughly three-quarters of the length of a football field.
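
Conceptually, once each access point has produced an angle estimate, the tag’s position follows from a least-squares intersection of the bearing lines. The sketch below shows only that final triangulation step, with hypothetical access point positions and angles; the harder part of the real system — recovering those angles from the amplitude of the backscattered signal — is omitted.

```python
import numpy as np

def localize(ap_positions, bearings_deg):
    """Least-squares intersection of bearing lines from several access points.

    ap_positions: known (x, y) coordinates of each AP, in meters.
    bearings_deg: measured angle from each AP toward the tag, in degrees.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, theta in zip(np.asarray(ap_positions, float), np.radians(bearings_deg)):
        d = np.array([np.cos(theta), np.sin(theta)])   # unit vector along the bearing
        P = np.eye(2) - np.outer(d, d)                 # projects onto the line's normal
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Hypothetical example: three APs and bearings measured toward a tag at (25, 40).
aps = [(0.0, 0.0), (80.0, 0.0), (0.0, 80.0)]
true = np.array([25.0, 40.0])
angles = [np.degrees(np.arctan2(*(true - p)[::-1])) for p in np.asarray(aps)]
print(localize(aps, angles))   # approximately [25. 40.]
```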

For the moment, the system is limited to roughly 30 kilobytes of data storage. If that can be expanded to include tiny cameras live streaming information about the condition of crops in the field or even the bees themselves, the notion of using live insects in place of drones for smart agriculture and other applications could really take off.

“Having insects carry these sensor systems around could be beneficial for farms because bees can sense things that electronic objects, like drones, cannot,” explained Gollakota. “With a drone, you’re just flying around randomly, while a bee is going to be drawn to specific things, like the plants it prefers to pollinate. And on top of learning about the environment, you can also learn a lot about how the bees behave.”

The team will present its research paper at MobiCom 2019, the Association for Computing Machinery’s 25th International Conference on Mobile Computing and Networking.

Read the UW News release here and visit the Living IoT project page here. Check out coverage in GeekWire, NBC Mach, CNBC, TechCrunch, Digital Trends, Vice, MIT Technology Review, New Atlas, Futurity, Engadget, Futurism, Inverse, Forbes, KOMO News, KUOW, The Seattle Times, Seattle PI, Business Insider, Digital Journal, and IEEE Spectrum.

Photo credits: Mark Stone/University of Washington

Read more →

MISL researchers earn Best Student Paper Award for designing and demonstrating content-based media search directly in DNA

Kendall Stewart onstage

Callista Bee presents a system for content-based media search in DNA

Researchers in the Molecular Information Systems Lab (MISL) have taken another step forward in their quest to develop a next-generation data storage system with the introduction of new mechanisms for content-based similarity search of digital data stored in synthetic DNA. The team, which includes researchers from the University of Washington and Microsoft, took home the Best Student Paper Award in recognition of its work from the 24th International Conference on DNA Computing and Molecular Programming (DNA 24) in October.

The winning paper describes an end-to-end DNA-based architecture for content-based similarity search of stored media — in this case, image files. The MISL team’s contribution includes a novel neural-network-based sequence encoder trained on more than 30,000 images from the Caltech256 image dataset, and a laboratory experiment demonstrating the technique on a set of 100 test images.

Instead of encoding and storing the complete image files, the researchers concentrated on building a database for storing and retrieving their associated metadata. Each image’s encoded DNA strand includes a “feature sequence” representing its semantic features, as well as an “ID sequence” pointing to the location of the complete file in another database.

By adapting a technique from the machine learning community called “semantic hashing” to work with DNA sequences, the team designed an encoder to output feature sequences that react more strongly with the feature sequences of similar images. This enables a molecular “fishing hook”: when a molecule representing a query image is added to the database, similar images react with and stick to the query. The resulting query-target pairs can then be extracted using standard laboratory techniques like magnetic bead filtration.

Diagram of the training methodology showing training images of binoculars and a sailboat.

The training methodology used for the sequence encoder. A neural network translates images into DNA sequences, which are compared for similarity. If the sequences are similar but the images are not – or vice versa – the neural network is updated to increase its accuracy.
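
The sketch below mirrors the training loop the caption describes, in deliberately simplified form: a small network maps image features to a soft DNA sequence, sequence similarity is computed differentiably, and the loss nudges it to agree with a 0/1 “visually similar” label. The sequence length, feature dimension, similarity measure, and loss are hypothetical stand-ins and do not reflect the MISL encoder’s actual architecture or objective.

```python
import torch
import torch.nn as nn

BASES, SEQ_LEN, FEAT_DIM = 4, 20, 512   # A/C/G/T; sequence length and feature size are made up

class FeatureSequenceEncoder(nn.Module):
    """Maps an image-feature vector to a soft distribution over DNA bases per position."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, SEQ_LEN * BASES))

    def forward(self, x):
        logits = self.net(x).view(-1, SEQ_LEN, BASES)
        return torch.softmax(logits, dim=-1)          # differentiable stand-in for one-hot bases

def sequence_similarity(p, q):
    """Expected fraction of positions at which two soft sequences agree on the base."""
    return (p * q).sum(dim=-1).mean(dim=-1)

encoder = FeatureSequenceEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One hypothetical training batch: pairs of image features plus a 0/1 "visually similar" label.
feats_a, feats_b = torch.randn(32, FEAT_DIM), torch.randn(32, FEAT_DIM)
visually_similar = torch.randint(0, 2, (32,)).float()

optimizer.zero_grad()
pred = sequence_similarity(encoder(feats_a), encoder(feats_b))
loss = nn.functional.mse_loss(pred, visually_similar)   # penalize mismatched similarity judgments
loss.backward()
optimizer.step()
```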

According to Allen School Ph.D. student and lead author Callista Bee, this type of efficient, content-based search mechanism will be key to unlocking DNA’s potential as the digital storage medium of the future.

“We’re approaching a time when zettabytes of data will be produced each year — a daunting challenge that requires us to think beyond the current state of the art,” Bee explained. “Our approach takes advantage of DNA’s near-data processing capabilities while borrowing from the latest machine learning techniques to present one possible solution for the indexing and retrieval of content-rich media.”

Bee’s co-authors on the work include Microsoft researcher Yuan-Jyue Chen, MISL members David Ward and Xiaomeng “Aaron” Liu, Allen School and Electrical & Computer Engineering professor Georg Seelig, Allen School professor Luis Ceze, and Microsoft senior researcher and Allen School affiliate professor Karin Strauss.

The team validated its design in wet-lab experiments using 100 target images and 10 query images, with 10 similar images for each query image included in the target set. The results, Bee said, were “moderately successful,” with visually similar files accounting for 30% of the sequencing reads for each query, despite comprising only 10% of the total database.

The researchers believe their approach can be generalized to databases containing any type of media. While it would be a challenge to scale such a system to larger and more complex datasets, the project opens up a promising new avenue of exploration around DNA-based data processing and content-based search.

Read the research paper here and learn more about MISL’s work here.

Congratulations to the entire team!

Read more →

Allen School’s Yin Tat Lee earns Best Paper Award at NeurIPS 2018 for new algorithms for distributed optimization

Yin Tat Lee

A team of researchers that includes professor Yin Tat Lee of the Allen School’s Theory of Computation group has captured a Best Paper Award at the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018). The paper presents two new algorithms that achieve optimal convergence rates for optimizing non-smooth convex functions in distributed networks, which are commonly used for machine learning applications to meet the computational and storage demands of very large datasets.

Performing optimization in distributed networks involves trade-offs between computation and communication time. Recent progress in the field has yielded optimal convergence rates and algorithms for optimizing smooth and strongly convex functions in such networks. In their award-winning paper, Lee and his co-authors extend the same theoretical analysis to solve an even thornier problem: how to optimize these trade-offs for non-smooth convex functions.

Based on its analysis, the team produced the first optimal algorithm for non-smooth decentralized optimization in the setting where each local function is Lipschitz — that is, its slope is uniformly bounded. Referred to as multi-step primal-dual (MSPD), the algorithm surpasses the previous state-of-the-art technique — the primal-dual algorithm — which offered fast communication rates in a decentralized and stochastic setting but without achieving optimality. Under the more challenging global regularity assumption, the researchers present a simple yet efficient algorithm known as distributed randomized smoothing (DRS) that adapts the existing randomized smoothing optimization algorithm for application in the distributed setting. DRS achieves a near-optimal convergence rate and communication cost.
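
For intuition about the randomized smoothing building block that DRS adapts, the toy sketch below estimates gradients of a Gaussian-smoothed surrogate using only function evaluations and applies them to a simple non-smooth objective on a single machine. It is purely illustrative — the objective, step size, and sample counts are arbitrary, and none of the distributed machinery or optimality analysis from the paper is reproduced.

```python
import numpy as np

def smoothed_gradient(f, x, gamma=0.1, samples=200, rng=None):
    """Monte Carlo gradient of the Gaussian-smoothed surrogate E[f(x + gamma * z)].

    Uses the identity grad = E[z * (f(x + gamma*z) - f(x))] / gamma, which needs
    only function values and therefore works even when f is non-smooth."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((samples, x.size))
    values = np.apply_along_axis(f, 1, x + gamma * z) - f(x)
    return (z * values[:, None]).mean(axis=0) / gamma

# Toy non-smooth objective: the L1 norm, whose kinks defeat ordinary gradients.
f = lambda v: np.abs(v).sum()
rng = np.random.default_rng(1)
x = np.array([3.0, -2.0, 0.5])
for _ in range(200):
    x -= 0.05 * smoothed_gradient(f, x, rng=rng)
print(np.round(x, 2))   # all three coordinates end up close to the minimizer at 0
```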

Lee contributed to this work in his capacity as a visiting researcher at Microsoft Research and a faculty member at the Allen School. His collaborators include lead author Kevin Scaman, a research scientist at Huawei Technologies’ Paris-based machine learning lab; Microsoft senior researcher Sébastien Bubeck; and researchers Francis Bach and Laurent Massoulié of Inria, the French National Institute for Research in Computer Science and Automation. Members of the team will present the paper later today at the NeurIPS conference taking place this week in Montreal, Canada.

Read the research paper here.

Congratulations, Yin Tat!

Read more →

Allen School kicks off Computer Science Education Week with Code.org and the Computer Science Teachers Association

Hadi Partovi onstage to kick off Computer Science Education Week

Code.org’s Hadi Partovi kicks off the Computer Science Education Week celebration on the UW Seattle campus.

The Allen School teamed up with Code.org and the Computer Science Teachers Association (CSTA) on Monday to host the kickoff event marking the start of 2018’s Computer Science Education Week and the largest-ever Hour of Code. The event featured special guests Melinda Gates, co-chair and trustee of the Bill & Melinda Gates Foundation, and Brad Smith, president and chief legal officer of Microsoft.

Gates engaged in a question and answer session with a local student about her inspiration for pursuing computer science and how efforts to engage girls in computer science early are helping to address the gender gap in computing. In addition to giving a shout-out to her high school math teacher, Mrs. Susan Bauer, Gates acknowledged the efforts of the Allen School that have seen us increase our percentage of CS degrees awarded to women to nearly twice the national average. “This isn’t because of chance,” Gates noted. “It’s because they worked at it.”

Noting that she was lucky to have a teacher who introduced her to computer science 35 years ago, Gates hoped that today’s students wouldn’t have to rely on luck to discover computing thanks to programs like Code.org, Girls Who Code, and Black Girls Code. “Not every girl is going to have a Mrs. Bauer to introduce her to computing,” Gates said. “But if we build the right support system, she won’t need one.”

In addition to energizing the room full of teachers, administrators, and students about engaging in computer science learning, Gates joined Code.org co-founder and CEO Hadi Partovi and CSTA Executive Director Jake Baskin in celebrating the recipients of the Champions for Computer Science Awards. These awards recognize individuals and organizations that are working to make computer science education accessible to everyone — including 2018 honoree and Allen School professor emeritus Richard Ladner.

Richard Ladner accepting his award from Melinda Gates

Richard Ladner (left) accepts his Champions for Computer Science Award from Melinda Gates as Hadi Partovi (right) looks on

Ladner was named a Computer Science Champion based on his leadership in launching and growing AccessCSForAll, a set of curriculum and professional development tools that empower teachers and schools to make computer science accessible to K-12 students with disabilities. CSTA and Code.org credited Ladner and collaborators Andreas Stefik and the Quorum programming team for their leadership in ensuring that the drive to increase CS educational opportunities is truly inclusive for all students. Happily, Ladner points out, the accessibility mindset is becoming more ingrained not just in education, but in industry, too — with more companies now focused on developing technologies and products that are built with accessibility in mind to serve the broadest community of users. “Accessibility is becoming mainstream,” Ladner said. “It’s not something one person does, it’s something everyone does.”

Smith — a longtime friend of the Allen School and a staunch advocate for computer science education, generally — ended the program on a high note by announcing Microsoft’s commitment of an additional $10 million to Code.org. The funding will support the organization’s efforts to make CS-related professional development available to teachers in every school, as well as its advocacy for state-level policies that will increase access to computer science education for K-12 students across the nation.

Allen School ambassadors welcome a group of students and parents to the 2018 Computing Open House

While today marked the official kickoff, the Allen School began celebrating Computer Science Education Week early, at our Computing Open House this past weekend. The open house is an annual tradition that coincides with both the start of CS Education Week and the birthday of computer programming pioneer Grace Hopper. More than 500 local students and parents descended upon the Paul G. Allen Center for Computer Science & Engineering for tours, research demos, and talks by industry representatives on how their computer science education helped them to launch fulfilling careers creating technologies that benefit people around the world. Students at the open house had a chance to learn about computer security, program an Edison robot, construct their own messages in binary and DNA, and get hands-on with the latest smartphone-based apps for mobile health.

Thanks to everyone who joined us, whether in person or online, to support computer science education — and congratulations to Richard and the team at Quorum for their much-deserved recognition!

Learn more about CS Education Week here and the Hour of Code here. View all of the Champions for Computer Science honorees here.

Read more →

Allen School team earns Best Paper Award at SenSys 2018 for new low-power wireless localization system

Vikram Iyer and Rajalakshmi Nandakumar holding their SenSys 2018 Best Paper Awards

Vikram Iyer (left) and Rajalakshmi Nandakumar, recipients of the SenSys 2018 Best Paper Award for µLocate

Researchers in the Allen School’s Networks & Mobile Systems Lab have developed the first low-power 3D localization system for sub-centimeter sized devices. The system, µLocate, enables continuous object tracking on mobile devices at distances of up to 60 meters — even through walls — while consuming mere microwatts of power. The team behind µLocate was recognized with a Best Paper Award at the recent Conference on Embedded Networked Sensor Systems (SenSys 2018) organized by the Association for Computing Machinery in Shenzhen, China — demonstrating once again that good things really do come in small packages.

Localization is integral to the Internet of Things (IoT). While WiFi radios and ultra-wideband radios are capable of fulfilling this function, their power needs are such that they can only operate for about a month using smaller coin cell or button cell batteries. On the other hand, radio frequency identification (RFID) tags require significantly less power and are a suitable size, but their limited range and inconsistent operation through barriers make them unsuitable for use in whole-home and commercial IoT applications.

“When it comes to tracking IoT devices, existing localization systems typically hit a wall due to their functional limitations or outsize power needs that make them impractical for real-world use,” observed Allen School Ph.D. student Rajalakshmi Nandakumar, lead author on the paper. “µLocate overcomes these barriers by eliminating the need for bulky batteries while enabling us to localize across multiple rooms using devices that are smaller than a penny.”

Nandakumar and her colleagues — Electrical & Computer Engineering Ph.D. student Vikram Iyer and Allen School professor Shyam Gollakota — developed a novel architecture that builds on the lab’s previous, pioneering work on long-range backscatter. That project introduced the first low-power, wide-area backscatter communication system using chirp spread spectrum (CSS) modulation. For this latest iteration, the team’s hardware design consists of an access point and a receiver that incorporates a low-power microcontroller equipped with a built-in oscillator. The access point transmits the CSS signal, which the receiver shifts by 1-2 MHz before backscattering it to the access point. The system is designed to operate concurrently across the 900 MHz, 2.4 GHz, and 5 GHz ISM radio bands; the access point extracts and combines the phase information from the backscattered signals across the spectrum to localize the target.
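
One way to picture how phase measurements across several frequencies pin down a distance is the toy grid search below: it predicts the wrapped round-trip phase a backscattered signal would accumulate at each candidate range and picks the range that best matches what was measured. The frequencies, range limit, and search procedure are hypothetical simplifications — the actual system combines phase across three ISM bands and must cope with noise, multipath, and 3D geometry.

```python
import numpy as np

C = 3e8   # speed of light, m/s

def wrapped_phase(distance, freqs):
    """Round-trip carrier phase of a backscattered signal at each frequency."""
    return (2 * np.pi * 2 * distance * freqs / C) % (2 * np.pi)

def estimate_range(measured, freqs, max_range=60.0, step=0.01):
    """Grid-search the distance whose predicted phases best match the measurements."""
    best_d, best_err = 0.0, np.inf
    for d in np.arange(0.0, max_range, step):
        diff = wrapped_phase(d, freqs) - measured
        err = np.sum(1 - np.cos(diff))      # circular error, insensitive to 2*pi wraps
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Hypothetical measurement: phases observed on a few 900 MHz-band carriers.
freqs = np.array([903e6, 915e6, 927e6])
true_distance = 18.7
print(estimate_range(wrapped_phase(true_distance, freqs), freqs))   # approximately 18.7
```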

The team designed two diminutive receiver prototypes, each of which could last between five and 10 years running on two button cell batteries: a multi-band prototype capable of operating across the aforementioned ISM bands at longer range, and an even smaller single-band prototype that operates at 5 GHz for short-range applications. The researchers evaluated µLocate in an open field and an office building to test its performance in line-of-sight and walled settings, respectively, demonstrating its ability to return a location value within 70 milliseconds at ranges of up to 60 meters. They also deployed the system in multiple real-world scenarios, including several single- and multi-story residences and across multiple rooms in a hospital wing.

“This was our lab’s first foray into both wireless device localization and the design of tiny, programmable devices,” noted Gollakota. “We’re excited that our work in this new line of research is already being recognized within the sensing community.”

To learn more about µLocate, read the research paper here.

Way to go, team!

Read more →

Allen School welcomes nine new faculty with expertise in cryptography, data science, machine learning, and more

The Allen School is thrilled to introduce nine outstanding educators and researchers who have joined or will soon join our faculty in the current academic year. The new arrivals strengthen our leadership in areas such as software engineering, networking, machine learning, and computer science education while enabling us to expand into exciting new territory spanning cryptography, computational design for manufacturing, and data science for human health and well-being.

Meet the latest additions to our scholarly community and discover how they will contribute to the University of Washington’s reputation for educational excellence and leading-edge innovation:

Tim Althoff, data science for health and well-being

Tim Althoff portrait

Tim Althoff will arrive at the Allen School in January 2019. Althoff’s research focuses on leveraging the detailed sensor and social data resulting from people’s interactions with smart devices and social networks to address pressing societal challenges. To that end, he develops novel computational methods for modeling human behavior in order to generate actionable insights into people’s health and well-being. Althoff’s work, which combines elements of data science, social network analysis, and natural language processing, is highly interdisciplinary and has applications that extend beyond computing to fields such as medicine and psychology.

For one project, Althoff worked with colleagues in engineering and medicine to perform a global analysis of physical activity using smartphone data for more than 700,000 people in 111 countries. The goal of the study was to better understand physical activity patterns and identify factors related to gender, health, income level, and the built environment that contribute to “activity inequality” — and, ultimately, to the more than 5 million deaths linked to physical inactivity each year. The team’s analysis, which was published in the journal Nature, was the largest-ever study of human physical activity of its kind, representing 68 million days of data covering billions of individual steps. In another study, Althoff and collaborators at Microsoft Research examined the impact of the popular mobile game Pokémon Go on physical activity levels by drawing upon data from 32,000 users of the Microsoft Band fitness tracker. The researchers concluded that mobile gaming apps like Pokémon Go have the potential to more effectively increase activity levels among low-activity populations than health-focused apps or other existing interventions.
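
The Nature study measured “activity inequality” within each country much as economists measure income inequality, using the Gini coefficient of the step-count distribution. The sketch below computes that statistic for two synthetic populations of daily step counts; the data and parameters are invented solely to show how the metric separates an even distribution from a skewed one.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values: 0 is perfect equality, 1 is maximal inequality."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    # Standard closed form over sorted values: G = 2*sum(i*v_i)/(n*sum(v)) - (n+1)/n
    return 2 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1) / n

# Hypothetical daily step counts for two populations of 1,000 people each.
even_steps   = np.random.default_rng(0).normal(8000, 500, 1000).clip(min=0)
skewed_steps = np.random.default_rng(1).lognormal(mean=8.5, sigma=0.9, size=1000)

print(round(gini(even_steps), 3), round(gini(skewed_steps), 3))   # the second is far higher
```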

In addition to his focus on physical wellness, Althoff has explored how computing can help combat mental illness, a major global health issue that affects more than 43 million adults in the United States alone. While psychotherapy and counseling are important tools in the treatment of mental health issues, researchers have lacked sufficient data on which conversation strategies are most effective. To address this shortcoming, Althoff and his colleagues employed natural language processing techniques to produce the most extensive quantitative analysis of the linguistic aspects of text-based counseling conversations to date. By using sequence-based conversation models, message clustering, word frequency analyses, and other methods, the team was able to identify the conversation strategies most associated with successful patient outcomes. The resulting paper earned Althoff and his co-authors Best Paper at the annual conference of the International Medical Informatics Association (IMIA 2017).

Althoff earned his Ph.D. in computer science from Stanford University, where he was a member of the Stanford InfoLab, Stanford Mobilize Center, and the group behind the Stanford Network Analysis Project (SNAP). He previously received his bachelor’s and master’s degrees in computer science from the University of Kaiserslautern, Germany. His work has been covered by The New York Times, The Wall Street Journal, The Economist, the BBC, CNN, and many others.

René Just, software engineering and security

René Just portrait

Former Allen School postdoc René Just returned to UW this fall from the University of Massachusetts Amherst, where he was a faculty member in the College of Information and Computer Sciences. Just’s research focuses on advancing software correctness, robustness, and security; it spans static and dynamic program analysis, mobile security, empirical software engineering, and applied machine learning. He is particularly interested in the development of novel techniques for automated testing and debugging that scale to real-world software systems. He also develops research and educational infrastructures with a focus on reproducibility and comparability of empirical research.

After earning his bachelor’s degree in computer science from the Cooperative State University Heidenheim, Just spent nearly two years in industry as a software design engineer for German company Fritz & Macziol. He returned to academia to earn his master’s and Ph.D. from the University of Ulm in Germany before taking up a position as a postdoctoral research associate at UW, where he worked with Allen School professor Michael Ernst in the Programming Languages & Software Engineering (PLSE) group.

Just’s research to advance the state of the art in software testing and debugging already has had an extensive impact within the software engineering community and earned him three Distinguished Paper Awards from the Association for Computing Machinery. For one award-winning project, Just and his collaborators studied the relationship between artificial faults, called mutants, and real faults, and the suitability of mutants for software testing research. This foundational work, which has received more than 260 citations in four years, provides strong empirical support for the use of mutants but also identifies inherent limitations. Just’s Major mutation framework played an important role in this project as it enables efficient mutation analysis of large software systems and fundamental research involving hundreds of thousands of mutants. For another award-winning project, Just and his collaborators analyzed the effectiveness and limitations of three popular automated test generation tools on more than 350 real-world faults. Their evaluation revealed that even using the most advanced techniques, fewer than 20 percent of the generated test suites detected a fault, and 15 percent of all generated tests were flaky—they failed randomly or generated false-positive warnings.
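
To illustrate what mutation analysis measures, the sketch below generates “mutants” of a tiny function by flipping a single arithmetic operator and then checks whether a test suite notices the change. Everything here — the program, the single mutation operator, and the deliberately weak test — is a hypothetical toy; the Major framework applies far richer sets of mutation operators to real software systems.

```python
import ast

class FlipAdd(ast.NodeTransformer):
    """Replace the i-th '+' in a parsed program with '-'."""
    def __init__(self, target_index):
        self.target_index = target_index
        self.seen = -1

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            self.seen += 1
            if self.seen == self.target_index:
                node.op = ast.Sub()
        return node

def mutants(source):
    """Yield executed namespaces, one per single-operator mutant of `source`."""
    n_adds = sum(isinstance(n, ast.BinOp) and isinstance(n.op, ast.Add)
                 for n in ast.walk(ast.parse(source)))
    for i in range(n_adds):
        tree = ast.fix_missing_locations(FlipAdd(i).visit(ast.parse(source)))
        namespace = {}
        exec(compile(tree, "<mutant>", "exec"), namespace)
        yield namespace

SOURCE = "def price(base, tax):\n    return base + tax\n"

def weak_test_suite(ns):
    return ns["price"](0, 0) == 0       # also passes for 'base - tax', so the mutant survives

killed = sum(not weak_test_suite(ns) for ns in mutants(SOURCE))
print(f"{killed} of 1 mutants killed")  # 0 killed: mutation analysis exposes the weak suite
```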

Just’s interest in the reproducibility of software engineering research led him to develop Defects4J, a first-of-its-kind repository of reproducible, isolated, and annotated faults coupled with a framework for experimentation and extension. Like the Major mutation framework, Defects4J is widely used in software engineering research and in undergraduate and graduate software engineering courses around the world. Since its inception in 2014, Defects4J has been referenced more than 300 times and has been used in the evaluation of multiple award-winning papers at top-tier research conferences.

Huijia “Rachel” Lin, cryptography

Rachel Lin portrait

Rachel Lin is one of two new hires set to bring exciting new expertise to the Allen School in the field of cryptography starting in January 2019, after more than four years as a faculty member at the University of California, Santa Barbara. Lin’s research in cryptography, as well as at its intersection with security and theoretical computer science, aims to weaken or even supplant our reliance on trust-based approaches to securing confidential data — for example, trust between the corporate client and cloud service provider, or between citizens and government agencies — in favor of cryptographically enforced approaches that provide stronger privacy and integrity guarantees.

While current systems are fallible in that trust can be eroded or lost, that is not the only challenge; the risk of a single point of failure producing a large-scale privacy breach is ever-present. Lin is interested in reducing or removing this risk using techniques such as program obfuscation, wherein programs are rendered unintelligible while their functionality is maintained, and secure multiparty computation, which enables a set of mutually distrustful entities to compute a function on their corresponding private data without revealing the data in the clear. Among the hurdles to advancing such alternatives to trust-based security is proving their feasibility based on well-studied, reliable computational hardness assumptions, such as the hardness of factoring integers — an area in which Lin has already made significant progress. This is particularly true of her work in indistinguishability obfuscation (IO), a promising avenue of cryptography research for which she earned a CAREER Award from the National Science Foundation (NSF) last year. In a series of projects, she has succeeded in simplifying the algebraic structures needed to construct IO. In one example, Lin established a connection between IO, one of the most advanced cryptographic objects, and the pseudorandom generator (PRG), one of the most basic and familiar cryptographic primitives, to prove that constant-degree, rather than high-degree, multilinear maps are sufficient for obfuscating programs. She subsequently refined this work to reduce the degree of multilinear maps required to construct IO from more than 30 to a grand total of three — a crucial step toward her goal of achieving IO from bilinear maps or other standard, well-studied objects, such as lattices.

Lin and her collaborators have demonstrated IO’s potential to be a powerful tool with multiple applications, including an alternative approach to designing fully homomorphic encryption without relying on lattices or the untested circular security assumptions, and ensuring the adaptive integrity and adaptive privacy of delegating Random Access Memory (RAM) computation to a cloud. She has also focused on advancing the state of the art in zero-knowledge proofs, non-malleability, and multi-party computation. On the latter, Lin earned a Best Paper Award at EUROCRYPT 2018 for achieving new levels of efficiency in multi-party computation protocols through the introduction of a novel framework for garbled interactive circuits, among other innovations.

Before joining the UC Santa Barbara faculty, Lin spent two years as a postdoctoral researcher at MIT. She earned her master’s and Ph.D. in computer science, with a minor in applied mathematics, from Cornell University and her bachelor’s in computer science from Zhejiang University in China.

Ratul Mahajan, networking

Allen School alumnus Ratul Mahajan (Ph.D., ’05) will return to his alma mater in January as a faculty member after spending 13 years in industry as a researcher and entrepreneur. During that time, Mahajan has focused on the development of new architectures, systems, and tools for making cloud networks more efficient, agile, and reliable — first during more than a decade at Microsoft Research in Redmond, and more recently as co-founder and CEO of Seattle-based startup Intentionet. Mahajan — a self-described “computer systems researcher with a networking focus” — draws inspiration and techniques from multiple domains, including machine learning, theory, human-computer interaction, formal verification, and programming languages. He has applied these techniques to a variety of research questions in the areas of Internet measurement, wireless networks, mobile systems, smart home systems, and network verification.

Cloud computing is a particularly compelling area of exploration for Mahajan — not only for the way it has transformed computing infrastructure on a practical level, but also for its suitability as a vessel for realizing new research ideas in real-world systems. One of those ideas is centralized resource allocation, a concept that turns the typical distributed approach to allocating resources across computer networks on its head. Aiming to move beyond the inefficiency and rigidity of the traditional model, Mahajan and his colleagues developed their software-driven wide area network (SWAN) to enable the centralized allocation of bandwidth in global networks connecting multiple datacenters in the cloud. SWAN was designed to achieve high efficiency, while satisfying policies such as preferential treatment for priority services, by taking a global view of traffic demand and rapidly analyzing and updating the network’s forwarding behavior in response. The researchers devised a novel technique to prevent transient congestion during demand-driven reconfigurations by leveraging a small amount of scratch capacity on the network links. In subsequent work, the team expanded the concept to the delivery of online services via an integrated infrastructure through Footprint, and introduced a network “operating system,” Statesman, that mediates between multiple, independently running applications to achieve a target state that preserves network-wide safety and performance invariants.
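
To give a flavor of the idea, the toy Python sketch below allocates bandwidth on a single link by priority class while holding back a slice of “scratch” capacity; the service names, numbers, and single-link model are illustrative assumptions rather than SWAN’s actual algorithm.

```python
# Toy illustration of centralized bandwidth allocation with "scratch" headroom.
# The single-link model, service names, and numbers are illustrative only.
SCRATCH_FRACTION = 0.1  # capacity held back on each link to absorb reconfigurations


def allocate(link_capacity_gbps, demands):
    """Allocate bandwidth on one inter-datacenter link by priority class.

    demands: list of (service, priority, requested_gbps) tuples, where a lower
    priority number is more important. Returns a dict of service -> granted_gbps.
    """
    usable = link_capacity_gbps * (1 - SCRATCH_FRACTION)
    grants = {}
    for service, _priority, requested in sorted(demands, key=lambda d: d[1]):
        granted = min(requested, usable)
        grants[service] = granted
        usable -= granted
    return grants


if __name__ == "__main__":
    demands = [("interactive", 0, 40), ("elastic", 1, 60), ("background", 2, 80)]
    print(allocate(100, demands))
    # roughly: interactive 40, elastic 50, background 0, with ~10 Gbps kept in reserve
```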

Mahajan and his colleagues borrowed from the programming languages and formal verification communities in building a set of sophisticated analysis and synthesis tools for simplifying network configuration tasks and eliminating configuration errors — the primary contributor to the dreaded network outage. They began with Batfish, a tool that enables general analysis of a network’s configuration based on its packet-forwarding behavior. The team followed that up with a pair of tools designed to move network configuration beyond testing to verification: abstract representation of control plane (ARC), for rapidly analyzing correctness under arbitrary failures, and efficient reachability analysis (ERA), for analyzing the reachability properties of various incarnations of a network using a symbolic model of the network control plane. To simplify the underlying task of generating correct configurations in the first place, Mahajan and his colleagues built Propane, the first system capable of generating border gateway protocol (BGP) configurations that are provably correct. Propane, which earned the team a Best Paper Award at SIGCOMM 2016, was followed by Propane/AT, a configuration-synthesis system that relies on abstract topologies to support network evolution.
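
The toy sketch below conveys the kind of question such tools answer, namely whether a source can still reach a destination under every possible single-link failure, by brute-force enumeration over a hypothetical four-node topology; production tools such as ARC and ERA reason about configurations symbolically rather than by enumeration.

```python
from collections import deque
from itertools import combinations


def reachable(nodes, edges, src, dst):
    """Breadth-first search over an undirected topology."""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    queue, seen = deque([src]), {src}
    while queue:
        current = queue.popleft()
        if current == dst:
            return True
        for nxt in adjacency[current] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False


def survives_failures(nodes, edges, src, dst, k=1):
    """Check that src still reaches dst under every combination of k link failures."""
    return all(
        reachable(nodes, [e for e in edges if e not in set(failed)], src, dst)
        for failed in combinations(edges, k)
    )


nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
print(survives_failures(nodes, edges, "A", "D", k=1))  # True: two disjoint A-D paths
```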

Before enrolling in the Allen School’s Ph.D. program, where he was advised by professors Tom Anderson and David Wetherall, Mahajan received his bachelor’s degree from the Indian Institute of Technology in Delhi. He has earned numerous accolades for his research: in addition to the Best Paper Award for Propane, Mahajan has been recognized by ACM SIGCOMM with Best Student Paper, Rising Star, and Test of Time awards; with the William R. Bennett Prize from the Institute of Electrical and Electronics Engineers (IEEE); with a Best Paper Award from the Haifa Verification Conference (HVC); with the Best Dataset Award from the Internet Measurement Conference (IMC); and with the Microsoft Significant Contribution Award for his work on the company’s cloud computing infrastructure.

Sewoong Oh, machine learning

Sewoong Oh will bring his expertise in machine learning to the Allen School in January after six years on the faculty of the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign. Oh focuses his research at the intersection of theory and practice, with an emphasis on the development of new algorithmic solutions for machine learning applications using techniques drawn from information theory, coding theory, applied probability, stochastic networks, and optimization.

One of the topics that Oh has been keen to explore is the rise of social-media sharing via anonymous messaging platforms. Spurred by an interest in how people use such platforms to support freedom of expression and ensure personal safety, Oh and his colleagues developed a set of novel techniques for safeguarding users’ anonymity against adversaries attempting to uncover the source of potentially sensitive messages online. The researchers developed a novel messaging protocol, adaptive diffusion, for which they earned the Best Paper Award at the Association for Computing Machinery’s International Conference on Measurement and Modeling of Computer Systems (ACM SIGMETRICS 2015). Adaptive diffusion rapidly spreads anonymous messages over an underlying contact network, such as a network of phone contacts or Facebook friends. Oh and his team demonstrated that perfect obfuscation of the source is guaranteed when the communication graph is an infinite regular tree. They went a step further with the development of preferential attachment adaptive diffusion (PAAD), a new family of protocols designed to counteract adversarial attempts to identify a message source through statistical inference in real-world social networks. Oh then turned his attention from preserving individuals’ anonymity in a crowd to leveraging the wisdom of the crowd, receiving an NSF CAREER Award to advance the algorithmic foundations of social computing and explore how the data generated by online communities can be harnessed to address complex societal challenges.
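
The heavily simplified Python sketch below captures the spirit of that approach: the message region is a ball around a “virtual source” that drifts away from the true source, so the originator does not sit at the center of the spread. The graph, probabilities, and helper functions are illustrative and omit much of the published protocol.

```python
import random
from collections import deque


def hop_distances(adj, src):
    """BFS hop distances from src to every node in the contact graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        current = queue.popleft()
        for nxt in adj[current]:
            if nxt not in dist:
                dist[nxt] = dist[current] + 1
                queue.append(nxt)
    return dist


def ball(adj, center, radius):
    """All nodes within `radius` hops of `center`."""
    return {n for n, d in hop_distances(adj, center).items() if d <= radius}


def spread(adj, true_source, steps, seed=0):
    """Simplified adaptive-diffusion-style spreading: every node within the
    current radius of a drifting 'virtual source' holds the message, so an
    adversary observing who has it does not see the true source at the center."""
    rng = random.Random(seed)
    dist = hop_distances(adj, true_source)
    virtual_source = rng.choice(sorted(adj[true_source]))
    infected = {true_source, virtual_source}
    for radius in range(1, steps + 1):
        # With probability 1/2, move the virtual source one hop farther from the source.
        farther = [n for n in adj[virtual_source] if dist[n] > dist[virtual_source]]
        if farther and rng.random() < 0.5:
            virtual_source = rng.choice(sorted(farther))
        infected |= ball(adj, virtual_source, radius)
    return infected, virtual_source


# A small illustrative contact tree rooted at node 0 (the true source).
adj = {
    0: [1, 2, 3], 1: [0, 4, 5], 2: [0, 6, 7], 3: [0, 8, 9],
    4: [1], 5: [1], 6: [2], 7: [2], 8: [3], 9: [3],
}
infected, vs = spread(adj, true_source=0, steps=2)
print(sorted(infected), "virtual source:", vs)
```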

Oh is also interested in advancing the state of the art in training algorithms for generative adversarial networks (GANs). GANs are leading-edge techniques for training generative models to produce realistic examples of images and texts using competing neural networks. They are particularly promising in image and video processing and for dialogue systems and chatbots. One of their shortcomings, however, is that they tend to produce samples with little diversity — even when trained on diverse data sets. Oh and his colleagues set out to better understand and mitigate this phenomenon, known as mode collapse, by examining GANs through the lens of binary hypothesis testing. This novel perspective yields a formal mathematical definition of mode collapse that allows one to represent a pair of distributions — the target and the generator — as a two-dimensional region. The analysis of these mode collapse regions leads to a new framework, PacGAN, which naturally penalizes generator distributions with more mode collapse during the training process. With PacGAN, the discriminator makes decisions based on multiple “packed” samples from the same class — either real or artificially generated — rather than treating each sample as a single input. This idea of packing is shown to be fundamentally related to the concept of mode collapse, with a packed discriminator inherently penalizing mode collapse. The researchers’ approach, which can be applied to any standard GAN, enables the generator to more readily detect a lack of diversity and combat mode collapse without requiring the significant computational overhead or delicate fine-tuning of hyperparameters associated with previous approaches.
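
The short PyTorch-flavored sketch below illustrates the packing step only: m samples are concatenated into a single discriminator input. The network shapes and the `pack` helper are illustrative assumptions, not the authors’ implementation.

```python
import torch
import torch.nn as nn


def pack(samples, m):
    """Group m samples into one 'packed' discriminator input by concatenating
    them along the feature dimension: (batch * m, d) -> (batch, m * d)."""
    batch_m, d = samples.shape
    assert batch_m % m == 0, "batch size must be a multiple of the packing degree"
    return samples.reshape(batch_m // m, m * d)


# Illustrative packed discriminator: it scores m samples at once, so a generator
# that keeps producing near-identical samples (mode collapse) is easier to spot.
m, d = 3, 16
discriminator = nn.Sequential(
    nn.Linear(m * d, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

real = torch.randn(12, d)          # 12 real samples -> 4 packed inputs
fake = torch.randn(12, d) * 0.01   # a collapsed generator: outputs all nearly identical
real_scores = discriminator(pack(real, m))
fake_scores = discriminator(pack(fake, m))
print(real_scores.shape, fake_scores.shape)  # torch.Size([4, 1]) torch.Size([4, 1])
```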

Oh received the ACM SIGMETRICS Rising Star Research Award in recognition of his outstanding early-career contributions to matrix factorization, statistical learning, and non-convex optimization. Before joining the UIUC faculty, Oh was a postdoctoral researcher in MIT’s Laboratory for Information and Decision Systems (LIDS). He earned his Ph.D. in electrical engineering from Stanford University and his bachelor’s in electrical engineering from Seoul National University in Korea.

Hunter Schafer, computer science education

Hunter Schafer joined the Allen School faculty this fall as a full-time lecturer. The appointment was a homecoming for Schafer, who graduated from the Allen School’s combined bachelor’s/master’s program this past spring. During the course of his studies, Schafer served as a part-time lecturer, teaching assistant (TA), and head TA for multiple sections of our introductory computer science courses.

As head TA for CSE 143, the follow-on to our popular introductory programming course, Schafer coordinated roughly 30 to 40 TAs per quarter. In this role, he helped to develop course goals, coordinated TA staff meetings, shared teaching best practices, and trained TAs in the grading of assignments — all in addition to handling his own quiz sections. In addition to playing a vital role in introducing hundreds of fellow students to computer science, Schafer served as a TA for CSE 373, the Allen School’s upper-division course on data abstractions and algorithms for non-CSE majors.

Outside of his teaching duties, Schafer spent time in industry as a software development intern. While at Socrata, he created a Python tool for classifying the semantic meaning of a dataset column; the tool improved the user experience and could be extended to other classification problems. He also worked as part of a team defining the Python libraries needed to support the company’s work on diverse machine learning problems. As an intern at Redfin, Schafer developed a tool to assist realtors in presenting the information contained in a potential buyer’s offer in a dynamic and visually appealing way.

Foreshadowing his future career path, Schafer went to the head of the class last summer as the quarter’s lecturer for CSE 143, overseeing a class of more than 100 students and a staff of seven TAs. That fall, while working toward his master’s degree, Schafer led an honors discussion section of CSE 142, introducing his students to topics in machine learning, data visualization, and online privacy to supplement their programming lessons. Throughout his time as an instructor, Schafer has aimed to help students see the relevance of computer science in their day-to-day lives, in addition to introducing them to concepts of interest beyond core programming.

Adriana Schulz, computational design for manufacturing

Adriana Schulz, who arrived at the Allen School this fall after earning her Ph.D. at MIT, focuses on computational design to drive the next great wave of manufacturing innovation. Drawing upon her expertise in computer graphics and inspired by recent hardware advances in 3D printing, industrial robotics, and automated whole-garment knitting, Schulz develops novel software tools that empower users to create increasingly complex, integrated objects.

At its core, Schulz’s work aims to democratize design and manufacturing through computation. To that end, she has produced numerous tools and data-driven algorithms to render the process more efficient and accessible to people of different skill levels, including those without domain expertise, while optimizing performance. For example, Schulz and her colleagues devised a way to interpolate parametric data from Computer Aided Design (CAD) models — nearly ubiquitous in professional design and manufacturing — and incorporated it into an interactive tool, InstantCAD, that enables users to quickly and easily gauge how changes to a mechanical shape’s geometry will impact its performance without the time-consuming and computationally expensive operations required by traditional CAD tools. Schulz and her fellow researchers also developed an algorithm and interactive visualization tool for exploring multiple, sometimes conflicting, design and performance trade-offs — enabling designers to efficiently evaluate and navigate such compromises. The team presented both projects at the International Conference & Exhibition on Computer Graphics and Interactive Techniques (ACM SIGGRAPH).
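
A toy illustration of the precompute-then-interpolate idea behind such interactive tools appears below; the “beam deflection” metric stands in for a real simulation and is entirely made up.

```python
import numpy as np


# Toy version of precompute-then-interpolate: run the expensive simulation at a
# few parameter samples offline, then answer interactive queries by interpolation.
def expensive_simulation(thickness_mm):
    """Stand-in for a slow physics solve: deflection shrinks as thickness grows."""
    return 1.0 / thickness_mm**3


samples = np.linspace(1.0, 10.0, 10)                      # offline: 10 slow evaluations
deflections = np.array([expensive_simulation(t) for t in samples])


def interactive_estimate(thickness_mm):
    """Cheap interpolated answer, suitable for use while a user drags a slider."""
    return np.interp(thickness_mm, samples, deflections)


print(interactive_estimate(3.7))
```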

Schulz has applied a similar approach to empower novice users to create their own functioning robots, flying drones, and even custom-built furniture. As a member of MIT’s Computational Fabrication Group, she co-led the development of Interactive Robogami, a framework for creating robots out of flat sheets that can be folded into 3D structures — reminiscent of origami, the Japanese art of paper folding. The system enables users to compose customized robot designs from a database of 3D-printable robot parts and test their functionality via simulation before moving ahead with fabrication. She also contributed to an interactive system for designing and fabricating multicopters. With its intuitive interface, the system enables even novice users to assemble a working aerial drone design while exploring tradeoffs between criteria such as size, payload, and battery usage. More recently, Schulz was part of the team that designed AutoSaw, a template-based system that enables the design and robot-assisted fabrication of custom carpentry items — an approach that could help usher in a new era of mass customization.

Before her arrival at MIT, Schulz earned her master’s degree in mathematics from the National Institute of Pure and Applied Mathematics and her bachelor’s in electronics engineering from the Federal University of Rio de Janeiro in Brazil. She is the co-author of the book Compressive Sensing and her work has been featured in Wired, TechCrunch, MIT Technology Review, New Scientist, IEEE Spectrum, and more.

Stefano Tessaro, cryptography

Stefano Tessaro will join the Allen School faculty in January after spending more than five years on the faculty of the University of California, Santa Barbara, where he holds the Glen and Susanne Culler Chair in Computer Science. Tessaro brings expertise in a variety of topics related to the foundations and applications of cryptography and its connections to disciplines such as theoretical computer science, information theory, and computer security. Together, he and fellow newcomer Rachel Lin will build upon the Allen School’s existing strengths in security and theory to expand into this exciting new area of research.

Tessaro is particularly interested in the concept of provable security, in which formal definitions of threat models enable rigorous security proofs. To that end, he often pursues theoretical advances that enable the application of provable security to real-world cryptography. For example, his work uses techniques from complexity theory, information theory, combinatorics and probability theory to study the effective security of methods for encryption and authentication, to yield new and improved mechanisms for password protection, and to explore tradeoffs between input/output efficiency and security for cloud computing applications.

Throughout his research, Tessaro’s goal has been to identify problems and solutions that blend practical value with theoretical depth. This principle is evident in his pioneering work — for which he earned an NSF CAREER Award — that seeks to reconcile the need for provable security guarantees with real-world efficiency demands. In a paper that earned the Best Paper Award at EUROCRYPT 2017, Tessaro and his co-authors presented security proofs for practical schemes that protect passwords against attacks using custom-made hardware, overcoming technical barriers in characterizing the power of memory-limited adversaries. He has also developed new information-theoretic techniques to analyze algorithms in symmetric cryptography. This work is the closest that theory researchers have come to understanding why basic cryptographic building blocks like the Advanced Encryption Standard (AES) — the most widely used cryptographic algorithm — withstand a broad class of attacks. Tessaro also studies and formalizes emerging threats, such as large-scale attackers (e.g., state actors) leveraging their ability to collect vast amounts of Internet traffic. He proved, for example, that core components of Internet-scale protocols like Transport Layer Security (TLS) are well designed to resist such attacks. Tessaro’s work is not limited to proofs; he recently surfaced attacks against standards for format-preserving encryption, a widely adopted tool for data protection in the financial industry.
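
For a concrete feel for the kind of scheme at issue, the snippet below hashes a password with scrypt, a widely deployed memory-hard function available in Python’s standard library; the cost parameters are illustrative, and the example is not drawn from Tessaro’s papers.

```python
import hashlib
import os

# Memory-hard password hashing with scrypt (in Python's standard library).
# The cost parameters below are illustrative; real deployments tune
# n (CPU/memory cost), r (block size), and p (parallelism) to their hardware.
password = b"correct horse battery staple"
salt = os.urandom(16)

digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
print(digest.hex())
```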

Last year, Tessaro was recognized with an Alfred P. Sloan Research Fellowship for his contributions to the theoretical foundations of cryptography — contributions that extend to their practical application to large-scale systems in the form of efficiency tradeoffs in searchable symmetric encryption (SSE) and the first oblivious storage system that is secure under concurrent access. Prior to his arrival at UC Santa Barbara, Tessaro held postdoctoral research positions at MIT and the University of California, San Diego. He earned his master’s and Ph.D. in computer science from ETH Zurich in Switzerland.

Brett Wortzman, computer science education

Brett Wortzman is no stranger to the Allen School community, having previously taught multiple sections of our introductory courses on a part-time basis and served as an instructor for our DawgBytes summer camps.

Wortzman began his career more than a decade ago as a software engineer in the technology industry after earning his bachelor’s in computer science from Harvard University. He soon discovered that he preferred the classroom to the conference room, however, and decided to make the leap to education. This fall, he is introducing a new class of UW students to the wonders of computer science as a full-time faculty member.

After earning his master’s in education from UW, Wortzman embarked on his new career by teaching computer science and mathematics at Issaquah High School. There, he was responsible for growing the school’s lone Advanced Placement computer science class — which originally reached a grand total of 16 students — into a robust program encompassing four unique computer science courses divided into eight sections and serving more than 200 students annually. His successful efforts earned him an Inspirational Teachers award from the Allen School last year.

In addition to his formal teaching duties, Wortzman has served in a variety of roles for TEALS, an organization that aims to increase youth access to computer science by partnering with K-12 teachers and schools to build sustainable CS education programs. He is also active in the Puget Sound Computer Science Teachers Association and is an avid organizer of educator meet-ups to build community and share best practices.

The latest round of new arrivals follows this past summer’s addition of Hannaneh Hajishirzi, whose expertise spans artificial intelligence, natural language processing, and machine learning. This talented cohort builds upon the Allen School’s recent success in bringing recognized leaders and rising stars to UW and the Seattle region, including roboticist Sidd Srinivasa and computer engineer Michael Taylor, human-computer interaction experts Jennifer Mankoff and Jon Froehlich, machine learning researcher Kevin Jamieson, and theoretical computer scientist Yin Tat Lee.

Welcome, one and all, to the Allen School family! Read more →
