
‘One of a kind’: Allen School administrator extraordinaire Jennifer Worrell receives College of Engineering Professional Staff Award

Portrait of Jennifer Worrell wearing a black and tan patterned shirt with a black scarf around her neck and draped over one shoulder, smiling and leaning with her arms folded against a concrete wall inside the Gates Center atrium. The atrium is softly lit, with black and metal railings of two floors of the building visible behind her.

“There is not one area of the school that she does not touch in some way.”

“She” is Jennifer Worrell, the Allen School’s director of finance and administration. And that observation was made by a colleague advancing her successful nomination for a 2023 Professional Staff Award from the University of Washington College of Engineering. Each year, the College of Engineering Awards honor faculty, research and teaching assistants, and staff like Worrell whose extraordinary contributions benefit the college community.

Now approaching two decades of service at the Allen School, Worrell started out as an office manager and moved into successively more complex roles — event coordinator, grants manager, lead grants manager — before stepping into her current position in 2017. In highlighting her achievements since taking over as the school’s first new DFA in more than 30 years, the College noted that Worrell’s “combination of warmth and organizational know-how contributes to a culture that benefits her team and the Allen School as a whole.” 

That combination makes Worrell so effective at her job that some in the school are convinced she possesses special powers. 

“Jen is like the great and powerful Oz,” said Kellus Stone, operations analyst at the Allen School and author of the aforementioned letter. “She’s the woman behind the curtain who makes sure everything runs smoothly as folks go about their business without giving a second thought as to how it all works.”

“How it all works” has only become more complex in recent years owing to the roll-out of new systems for managing everything from payroll to print jobs that coincided with a period of rapid growth. That growth has led to the school doubling its degree production, doubling its physical space, and surpassing $75 million in yearly expenditures — with roughly half going toward research.

“Jen has been a key contributor to the Allen School’s success and why it is thriving and growing,” said Megan Russell, assistant director of human resources. “Any time there is a need for someone to fill a gap, Jen raises her hand and says, ‘I’ll do it.’ When an employee says they’re overwhelmed, she responds with, ‘What can I do to help?’ 

“She will never take any credit for it, but she deserves it,” Russell continued. “We are all better for her presence here.”

Despite her can-do attitude and willingness to fill any gap, Worrell would have been forgiven for questioning her presence here after enduring a trial by fire immediately upon ascending to her position. When she took the reins of the school’s Business Office, her first task was to implement and train her team on a new online payroll system, Workday, that was being rolled out across the University. If that wasn’t sufficiently daunting, her second task was to fill two open positions responsible for entering Allen School payroll into this same system after the incumbents left shortly after the big roll-out.

There were times, in those early days, that Worrell wasn’t sure how she would make it past lunch, let alone to the end of the day. But make it she did, repeatedly rising to the occasion while overseeing not only the Business Office, but also Research Administration and Human Resources. Two more teams, Facilities & Operations and Events, would be added to her portfolio later. Each time she was called upon for advice or assistance in response to a crisis, she would answer with a genuine smile on her face — and a genuine concern for the wellbeing of her colleagues.

“Jen leads the entire business administration team, and yet when I have meetings with her, she is fully present and engaged, offering helpful solutions and encouragement,” said Amber Cochran, assistant director of events for the Allen School. “She has the ability to make each staff member feel seen and valued.”

She also leads by example as the de facto head of the Staff Executive Committee, a group that comprises staff directors and assistant directors responsible for various functions that make up the administrative and operational side of the school. This group, which encompasses not only Worrell’s functional teams but also Undergraduate Student Services, Graduate Student Services, Technical Support, External Relations and Communications, engages in high-level organizational planning and develops unified policies and procedures along with consistent messaging across the entire school.

The role is challenging enough on a good day; it reached a whole new level when COVID-induced remote working scattered those directors, assistant directors and their teams across the region — and sometimes even farther afield. And yet, Worrell worked with her colleagues to quickly adapt, taking steps to ensure staff maintained a sense of connection and had the resources they needed to remain both agile and resilient in the face of uncertainty.

“Jen is the linchpin — she is the central pillar of our school,” said Magdalena Balazinska, professor and director of the Allen School. “She has very deep expertise and can answer any question on almost any topic. We couldn’t have achieved our current growth without her help and leadership.”

Worrell is extending that help and leadership to assist with the latest overhaul of campus-wide systems known as the University of Washington Finance Transformation (UWFT). Her colleague Debbie Carnes, who serves with her on the Process Transformation Team, has witnessed firsthand how Worrell has employed her professional skills and personal empathy to assist UWFT program staff in understanding how changes to the administration of research grants and other fiscal processes are likely to impact operational staff college-wide.

“Jen’s longtime service to the College, ability to come up with innovative and creative solutions, resourcefulness and positivity are an asset to us all,” said Carnes, administrator for the UW Department of Chemical Engineering. “I cannot think of anyone more deserving of this award than Jen.”

Aside from her well-earned reputation as a skilled leader and a veritable fountain of knowledge about how the University works, perhaps the greatest endorsement Worrell has collected is that of colleagues who point to the time and care that she gives them — even when she is busy. Make that especially when she is busy.

“Not long ago, I ran into Jen in the hallway as she was rushing from one meeting to the next. Despite that, she stopped and asked me how I was doing,” recalled Stone. “I started to answer and then stopped myself and apologized, as I could see she was in a hurry. ‘It’s okay,’ she replied with a smile. ‘You are important, too.’

“I believe that encapsulates who Jen is at her core,” continued Stone. “She is one of a kind.”

Members of the Allen School will gather to celebrate Worrell’s recognition, and that of UW Distinguished Staff Award recipient Chloe Dolese Mandeville, at a reception on June 14. 

Learn more about the College of Engineering Awards here.


‘Not a job for a mere mortal’: Assistant Director for Diversity & Access Chloe Dolese Mandeville receives UW Distinguished Staff Award

Studio portrait of Chloe Dolese Mandeville smiling against a black background.

Champion, advocate, role model…based on her colleagues’ descriptions, Chloe Dolese Mandeville sounds like a regular Girl Scout. Which, it so happens, she is: for the past two and a half years, the Allen School’s Assistant Director for Diversity & Access has volunteered as a troop leader for the Girl Scouts of Western Washington, hosting activities on campus and inspiring girls to see computing as a potential career path.

It is but one example of the many ways in which Dolese Mandeville has helped students to engage with the field — efforts that have now earned her a 2023 Distinguished Staff Award from the University of Washington. Part of the UW Awards of Excellence, the Distinguished Staff Award is the highest honor bestowed upon staff by the University. 

As the saying goes, not all heroes wear capes.

“Chloe’s responsibilities are enormous — hers is definitely not a job for a mere mortal,” said professor Dan Grossman, Vice Director of the Allen School. “But she built a strong team to help her get it done, and she is a phenomenal leader. People love working with her.”

After graduating from UW with a bachelor’s in psychology with a minor in education, Dolese Mandeville joined the Allen School’s undergraduate advising team in 2016 to assist students in charting their own educational journeys. She took a particular interest in transfer students and the unique challenges they face in acclimating to the UW, teaching a seminar designed to help ease the transition. She simultaneously worked on a transition of her own as she pursued a master’s degree in leadership in higher education. 

That degree would come in handy when, a mere two months after completing it, she took the reins of the school’s Diversity & Access program.

At the time, the Allen School’s undergraduate program had earned a national reputation for its success in recruiting and retaining women in computer science. But gender was the only area in which the school seemed to be making headway when it came to the breadth of students it serves. Not long before Dolese Mandeville assumed her present role, Jeff Dean (Ph.D., ‘96), Google Senior Fellow and Chief Scientist, and his wife Heidi Hopper approached school leaders with a challenge to extend the same energy and fervor they had devoted to growing the school’s gender diversity to other underrepresented groups. 

Dolese Mandeville embraced that challenge — and ran with it. Among her first priorities was morphing the school’s K-12 outreach programs from “broad and shallow” to “narrow and deep” by building substantive, sustainable partnerships with a set of schools and community organizations that directly served student populations the school was trying — and to that point, largely failing — to reach. With this new approach, the school soon surpassed the Seattle campus-wide average in the proportion of students who are Black or African American, from economically disadvantaged backgrounds, or among the first in their families to pursue a four-year degree. Previously, the share of the school’s students who identified with these groups was half that of the campus as a whole, or less.

“We are serving an increasingly diverse undergraduate population that is more reflective of the face of Washington and of technology users around the world,” said professor Ed Lazowska, Bill & Melinda Gates Chair Emeritus in Computer Science & Engineering in the Allen School. “Chloe has been instrumental in this remarkable transformation. We wouldn’t have made this progress without her.”

The notion of transformation comes up repeatedly in conversations about Dolese Mandeville’s impact. It is among the many superlatives offered by members of the Allen School’s undergraduate student services team who work alongside her every day.

“Chloe’s compassion, skill, talent and hard work have truly had a transformational effect on the Allen School student experience,” said Crystal Eney, director of undergraduate student services. “Chloe’s tenacity and creativity are among her greatest strengths, and the Diversity & Access team has risen leaps and bounds from where it started under her leadership.”

Another word that is mentioned in connection with Dolese Mandeville is “fierce” — but her peers are quick to point out that such fierceness is accompanied by compassion and kindness. And, they note, her leadership is paying dividends not only for UW but also for the broader field of computing.

“Chloe’s impact on the Allen School and computing is vast and unparalleled. Her leadership in building equitable, justice-oriented programs and systems while centering the student experience is one of her greatest strengths,” observed Leslie Ikeda, who manages the Allen School Scholars Program. “If anyone can transform the work we are doing to support our field’s most vulnerable populations, it’s Chloe.”

A row of six people posed in a row with arms interlocking on a building rooftop with paving stones and river rock, with trees and brick buildings against a cloudy sky in the background. There is a curly-haired dog seated between the legs of the person in the middle.
The team that Chloe built (from left): Leslie Ikeda, Chloe Dolese Mandeville, Kayla Shuster Sasaki with Sailor Shuster Sasaki, Juliet Quebatay, EJ Pinera and Christina Huynh

The program Ikeda manages, formerly known as Allen School Startup, was initially conceived as an immersive, four-week summer experience to assist incoming first-year students who are first-generation, low-income and/or from underserved communities in their transition to college. It has since evolved under Dolese Mandeville’s direction into a comprehensive, year-long cohort-based program with wraparound support. The summer bridge course remains, but that is now accompanied by increased staff support, one-to-one mentorship, workshops that supplement students’ first-year coursework, a new study hall course and community-building events throughout the year.

“Our mission is to educate the next generation of outstanding computer scientists and computer engineers who reflect the diverse needs, backgrounds and experiences of people in society at large,” said Juliet Quebatay, senior program manager for K-12 outreach programs. “Chloe supports us all with time, energy, constructive feedback and a clear vision of where we want to go — all while creating realistic, sustainable collaborations and programming that will help the school get there.”

“Us all” is the close-knit team of full-time professional staff that Dolese Mandeville has assembled to execute on that vision. In addition to Quebatay and Ikeda, the team includes Christina Huynh, academic adviser for the Allen Scholars; Kayla Shuster Sasaki, who focuses on high school and transfer student recruiting; and EJ Pinera, who works directly with Allen School student groups such as Ability, Women in Computing (WiC), GEN1, Minorities in Tech (MiT) and Q++ — to name only a few. Like many of the school’s current DEIA-focused initiatives, those groups got their start with Dolese Mandeville’s encouragement.

“Chloe championed the importance of student groups in building community and a sense of belonging for all students in the Allen School,” Pinera said. 

Dolese Mandeville also championed a mentorship initiative alongside undergraduate students called Changemakers in Computing. CiC is a summer program for rising juniors and seniors in Washington state high schools interested in exploring technology and its intersection with society and justice. Through a combination of culturally relevant project-based learning and networking opportunities, the program empowers students from marginalized backgrounds to engage with computing as a potential career while building a community of future computer scientists and engineers who will be changemakers in the field. Importantly, CiC is completely free to participants; meals, public transportation to campus and all activities are covered by the program, as is an education stipend, to ensure that a lack of financial resources is no barrier to student participation. 

The program has grown from serving roughly 20 high school students when it was launched in 2021 to 40 students in the most recent cohort. Encouraging students to lead the way, as she did with CiC — and backing them up with the tools and resources that will help them to succeed — is characteristic of Dolese Mandeville’s approach.

“Chloe prioritizes student voices,” said Chelsea Navarro, senior academic adviser. “She takes actions big and small to ensure that students of all backgrounds feel that they belong and can thrive here.”

Those actions include teaming up with Pinera, Assistant Director of Advising Jenifer Hiigli and Senior Academic Adviser Rakeb Million to push for the creation of physical spaces in the school’s buildings that reflect its values around DEIA. Spaces such as the Diversity & Access student lounge and a dedicated prayer/meditation room offer places where students can support each other, share experiences and honor their whole selves.

In addition to taking concrete steps that contribute to a more welcoming and inclusive culture, Dolese Mandeville is also committed to setting the school up for success over the long haul.

“Chloe is amazing and an incredible asset to the Allen School. Our entire community — students, staff and faculty — benefit from her presence,” said professor Tadayoshi Kohno, the Allen School’s associate director for diversity, equity, inclusion and access. “In summer 2020 Chloe and I started working on a 5-year strategic plan to guide our DEIA work, and her vision, leadership and wisdom have been instrumental in getting us to where we are today.”

“Where we are today” is a testament to how effective Dolese Mandeville has been in helping the Allen School rise to the challenge issued by Hopper and Dean since she stepped into her role.

“I see firsthand, every day, the amount of energy, compassion and thought Chloe puts into building out our DEIA efforts,” said Hiigli. “The Allen School would absolutely not be the same if she had not been here building these programs over the past several years. 

“Chloe’s work has benefited thousands of computer science students in countless ways.”

Two of Dolese Mandeville’s Allen School colleagues were also among the nominees for 2023 Distinguished Staff Awards: Senior Academic Adviser Chelsea Navarro, in the individual impact category, and Robotics Lab Manager Selest Nashef, in the individual collaboration category.

Dolese Mandeville and her fellow honorees will be formally recognized at a campus ceremony on June 8.

Learn more about the UW Awards of Excellence here.

Perfect match(ing): Professor Thomas Rothvoss wins 2023 Gödel Prize for proving the exponential complexity of a core problem in combinatorial optimization

Portrait of Thomas Rothvoss smiling in a blue-green t-shirt with hazy blue sky and part of an old sand-colored building overlooking a city behind him.

University of Washington professor Thomas Rothvoss, a member of the Allen School’s Theory of Computation group with a joint appointment in the UW Department of Mathematics, has received the 2023 Gödel Prize recognizing outstanding papers in theoretical computer science for “The matching polytope has exponential extension complexity.” In the paper, Rothvoss proved that linear programming — a core technique in combinatorial optimization for modeling a large class of problems that are polynomial-time solvable — cannot be used to solve the perfect matching problem in polynomial time. He originally presented these results at the 46th Association for Computing Machinery Symposium on Theory of Computing (STOC 2014).
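For readers who want the formal statement, the objects involved can be written down compactly. These are the standard textbook definitions, not excerpts from the paper itself:

```latex
% Edmonds' description of the perfect matching polytope of a graph G = (V, E):
% nonnegativity, degree constraints, and exponentially many odd-set constraints.
P_{\mathrm{match}} = \Big\{\, x \in \mathbb{R}^{E}_{\ge 0} \;:\;
  \sum_{e \in \delta(v)} x_e = 1 \ \ \forall v \in V, \qquad
  \sum_{e \in \delta(U)} x_e \ge 1 \ \ \forall U \subseteq V,\ |U| \text{ odd} \,\Big\}

% Extension complexity: the fewest facets (inequalities) of any
% higher-dimensional polytope Q that linearly projects onto P.
\mathrm{xc}(P) = \min\big\{\, \#\mathrm{facets}(Q) \;:\;
  Q \text{ a polytope},\ \pi(Q) = P \text{ for some linear map } \pi \,\big\}

% Rothvoss's theorem (STOC 2014), for the complete graph on n vertices:
\mathrm{xc}\big(P_{\mathrm{match}}\big) = 2^{\Omega(n)}
```

In words: although Edmonds’ inequality description has exponentially many odd-set constraints, one might hope a cleverly chosen higher-dimensional polytope with only polynomially many facets projects down onto it. Rothvoss proved no such compact extended formulation exists.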

Rothvoss shares this year’s accolade with researchers Samuel Fiorini and Serge Massar of the Université Libre de Bruxelles, Hans Raj Tiwary of Charles University in Prague, Sebastian Pokutta of the Zuse Institute Berlin and Technische Universität Berlin, and Ronald de Wolf of the Centrum Wiskunde & Informatica and the University of Amsterdam. Two years before Rothvoss published his seminal result, that team proved the extension complexity of the polytope for the Traveling Salesperson Problem is exponential — confirming that there is no polynomial-sized extended formulation, and therefore no small linear program, that can be used to solve the TSP. 

That result provided a partial, yet definitive, answer to a problem posed by theoretician Mihalis Yannakakis two decades prior. For Rothvoss and his colleagues in the tight-knit theory community, it was a watershed moment.

“Most of the time in complexity theory, we deal in conjectures but can’t actually prove any of them,” Rothvoss said. “On a good day, we can maybe prove that one conjecture implies another. So it was rather surprising when Sam and the rest of that group proved, completely unconditionally, the exponential extension complexity of the TSP polytope.”

The group further proved that its result extended to the maximum cut and stable-set polytopes, as well. But that proof, significant as it was, only answered the question for problems that are NP-hard. Inspired by the progress Fiorini and his collaborators had made, Rothvoss aimed to settle Yannakakis’ question once and for all for problems that are not NP-hard — that is, well-understood polytopes, such as that of the perfect matching problem, over which linear functions are known to be optimizable in polynomial time. 

And settle it, he did.

“I focused on it full-time for half a year,” Rothvoss recalled. “A couple of the technical aspects of that 2012 paper were also useful for my purposes, such as a technique drawn from Razborov’s classic paper on communication complexity, while others I had to modify.

“In particular, we knew the so-called rectangle covering lower bound used by Fiorini et al. to great effect in the case of TSP would not suffice for the matching polytope,” he continued. “In fact, the rectangle cover number for matchings is polynomial in the number of vertices, so it turned out that a more general technique — hyperplane separation lower bound — works instead.”

In the process of arriving at his proof, Rothvoss confirmed that Edmonds’ characterization of the matching polytope, made nearly half a century earlier, is essentially optimal. According to Allen School professor James R. Lee, his colleague’s work was — and remains — a significant insight with ramifications in mathematics, algorithm design and operations research.

“Thomas’ work is a masterful combination of ideas from two seemingly disparate areas of TCS,” said Lee. “It’s the synthesis of really profound insights of Yannakakis and Razborov from three decades ago, weaving together polyhedral combinatorics and communication complexity to settle a problem that essentially predates the era of P vs. NP.”

Rothvoss previously received the Delbert Ray Fulkerson Prize from the Mathematical Optimization Society and the American Mathematical Society for the same work. He is also the past recipient of a Packard Fellowship, a Sloan Research Fellowship, a National Science Foundation CAREER Award and Best Paper Awards at STOC, the Symposium on Discrete Algorithms (SODA) organized by the ACM and the Society for Industrial and Applied Mathematics (SIAM), and the Conference on Integer Programming and Combinatorial Optimization (IPCO). 

The Gödel Prize, named for mathematical logician Kurt Gödel, is co-sponsored by the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT) and the European Association for Theoretical Computer Science (EATCS). Rothvoss and his fellow honorees will be formally recognized at STOC 2023 in Orlando, Florida next month. Learn more about the Gödel Prize here.

Allen School’s Simon Du and Sewoong Oh to advance AI for responding to threats both natural and human-made as part of NSF-led National AI Research Institutes

Outline map of the United States with stars and dots on various locations indicating the presence of lead organizations or organizations with subawards under the National AI Research Institutes program, accompanied by the NSF logo.

Since 2020, communities around the globe have endured more than 1,100 natural disasters combined. From floods and drought to earthquakes and wildfires, these events contribute to human suffering and economic upheaval on a massive scale. So, too, do pandemics; since the emergence of SARS-CoV-2 at the end of 2019, nearly 7 million people have died from COVID-19.

Then there is the human, economic and geopolitical toll caused by cyberattacks. While there is no way to know for certain, one oft-cited study estimated hackers launch “brute force” attacks against a computer once every 39 seconds, the equivalent of roughly 800,000 attacks per year. The fallout from malicious actors gaining unauthorized access to these and other systems — ranging from an individual’s laptop to a country’s electrical grid — is projected to cost as much as $10.5 trillion worldwide by 2025.
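As a quick sanity check on that figure, the 39-second interval extrapolates to roughly 800,000 attacks over a 365-day year:

```python
# One attack every 39 seconds, extrapolated over a 365-day year.
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000 seconds
attacks_per_year = seconds_per_year / 39
print(f"{attacks_per_year:,.0f} attacks per year")  # on the order of 800,000
```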

Whether natural or human-made, events requiring rapid, coordinated responses of varying complexity and scale could be addressed more efficiently and effectively with the help of artificial intelligence. That’s the thinking behind two new National Artificial Intelligence Research Institutes involving University of Washington researchers, including Allen School professors Simon Shaolei Du and Sewoong Oh, and funded by the National Science Foundation.

AI Institute for Societal Decision Making

Portrait of Simon Du in a dark blue-grey button-down shirt with blurred foliage in the background.

Allen School professor Simon Shaolei Du will contribute to the new AI Institute for Societal Decision Making (AI-SDM) led by Carnegie Mellon University. The institute will receive a total of $20 million over five years to develop a framework for applying artificial intelligence to improve decision making in public health or disaster management situations, when the level of uncertainty is high and every second counts, drawing on the expertise of researchers in computer science, social sciences and humanities along with industry leaders and educators.

“AI can be a powerful tool for alleviating the human burden of complex decision making while optimizing the use of available resources,” said Du. “But we currently lack a holistic approach for applying AI to modeling and managing such rapidly evolving situations.”

To tackle the problem, AI-SDM researchers will make progress on three key priorities to augment — not replace — human decision making, underpinned by fundamental advances in causal inference and counterfactual reasoning. These include developing computational representations of human decision-making processes, devising robust strategies for aggregating collective decision making, and building multi-objective and multi-agent tools for autonomous decision-making support. Du will focus on that third thrust, building on prior foundational work in reinforcement learning (RL) with long-time collaborators Aarti Singh, a professor at CMU who will serve as director of the new institute, and Allen School affiliate professor Sham Kakade, a faculty member at Harvard University, along with CMU professors Jeff Schneider and Hoda Heidari.

Adapting RL to dynamic environments like that of public health or disaster management poses a significant challenge. At present, RL tends to be most successful when applied in data-rich settings involving single-agent decision making and using a standard reward-maximization approach. But when it comes to earthquakes, wildfires or novel pathogens, the response is anything but straightforward; the response may span multiple agencies and jurisdictions, the sources of data will not have been standardized, and each incident response will unfold in an unpredictable, situation-dependent manner. Compounding the problem, multi-agent decision making algorithms have typically performed best in scenarios where both planning and execution are centralized — an impossibility in the evolving and fragmented response to a public health threat or natural or human-made disaster, where the number of actors may be unknown and communications may be unreliable. 

Du and his colleagues will develop data-efficient multi-agent RL algorithms capable of integrating techniques from various sources while satisfying multiple objectives informed by collective social values. They will also explore methods for leveraging common information while reducing sample complexity to support effective multi-agent coordination under uncertainty.
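The institute’s specific algorithms are not yet published, but one ingredient named above — satisfying multiple objectives — can be illustrated with a toy sketch: a tabular Q-learning agent that receives a vector of rewards (progress toward a goal and a resource cost) and scalarizes them with preference weights. The environment, reward terms and weights are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D corridor: 6 states, actions {0: left, 1: right}; goal at state 5.
N_STATES, N_ACTIONS = 6, 2

def step(s, a):
    """Move left or right; return the next state and a vector of two rewards:
    (progress toward the goal, negative resource cost per move)."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    progress = 1.0 if s2 == N_STATES - 1 else 0.0
    cost = -0.1  # each move consumes resources
    return s2, np.array([progress, cost])

# Preference weights encode how the objectives trade off (illustrative values).
weights = np.array([1.0, 0.5])

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.5, 0.95, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r_vec = step(s, a)
        r = float(weights @ r_vec)  # scalarize the multi-objective reward
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == N_STATES - 1:
            break

# The learned greedy policy should move right from every non-goal state.
policy = np.argmax(Q, axis=1)
print(policy[:-1])  # greedy actions for states 0..4
```

In a real deployment the weights themselves would be informed by collective social values rather than hard-coded, which is part of what makes the aggregation problem described above difficult.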

But the algorithms will only work if humans are willing to use them. To that end, Du and his collaborators will design graduate-level curricula in human-AI cooperation and work through programs such as the Allen School’s Changemakers in Computing to engage students from diverse backgrounds — just a couple of examples of how AI-SDM partners plan to cultivate both an educated workforce and an informed public.

“There is the technical challenge, of course, but there is also an educational and social science component. We can’t develop these tools in a vacuum,” Du noted. “Our framework has to incorporate the needs and perspectives of diverse stakeholders — from elected officials and agency heads, to first responders, to the general public. And ultimately, our success will depend on expanding people’s understanding and acceptance of these tools.”

In addition to CMU and the UW, partners on AI-SDM include Harvard University, Boston Children’s Hospital, Howard University, Penn State University, Texas A&M University, the MITRE Corporation, Navajo Technical University and Winchester Thurston School. Read the CMU announcement here.

AI Institute for Agent-based Cyber Threat Intelligence and Operation

Portrait of Sewoong Oh wearing eyeglasses with thin, round dark frames and a black t-shirt against a warmly lit building interior.

Allen School professor Sewoong Oh and UW lead Radha Poovendran, a professor in the Department of Electrical & Computer Engineering, will contribute to the new AI Institute for Agent-based Cyber Threat Intelligence and OperatioN (ACTION). Spearheaded by the University of California, Santa Barbara, the ACTION Institute will receive $20 million over five years to develop a comprehensive AI stack to reason about and respond to ransomware, zero-day exploits and other categories of cyberattacks. 

“Attackers and their tactics are constantly evolving, so our defenses have to evolve along with them,” Oh said. “By taking a more holistic approach that integrates AI into the entire cyberdefense life cycle, we can give human security experts an edge by rapidly responding to emerging threats and make systems more resilient over time.”

The complexity of those threats, which can compromise systems while simultaneously evading measures designed to detect intrusion, calls for a new paradigm built around the concept of stacked security. To get ahead of malicious mischief-makers, the ACTION Institute will advance foundational research in learning and reasoning with domain knowledge, human-agent interaction, multi-agent collaboration, and strategic gaming and tactical planning. This comprehensive AI stack will be the foundation for developing new intelligent security agents that would work in tandem with human experts on threat assessment, detection, attribution, and response and recovery.

Oh will work alongside Poovendran on the development of intelligent agents for threat detection that are capable of identifying complex, multi-step attacks and contextualizing and triaging alerts to human experts for follow-up. Such attacks are particularly challenging to identify because they require agents to sense, reason about and correlate events that span multiple domains, time scales and abstraction levels — scenarios for which high-quality training data may be scarce. Errors or omissions in the data can lead agents to generate a lot of false positives, or conversely, miss legitimate attacks altogether. 

Recent research using deep neural networks to detect simple backdoor attacks offers clues for how to mitigate these shortcomings. When a model is trained on data that includes maliciously corrupted examples, small changes in the input can lead to erroneous predictions. Examining the representations such a model learns is an effective technique for identifying the corrupted examples, as they leave traces of their presence in the form of spectral signatures. Those traces are often small enough to escape detection, but state-of-the-art statistical tools from robust estimation can be used to boost their signal. Oh will apply this same method to time series over a network of agents to enable the detection of outliers that point to potential attacks in more complex security scenarios.
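The spectral-signature idea can be sketched in a few lines: corrupted examples tend to shift a model’s learned representations along a common direction, so scoring each example by its squared projection onto the top singular vector of the centered representation matrix separates them from clean data. The representations below are synthetic stand-ins for what a trained model would produce; the dimensions, counts and shift magnitude are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "learned representations": 500 clean examples plus 25 corrupted
# ones shifted along a shared direction (the trace a backdoor leaves behind).
d = 32
clean = rng.normal(size=(500, d))
trigger_direction = rng.normal(size=d)
trigger_direction /= np.linalg.norm(trigger_direction)
corrupted = rng.normal(size=(25, d)) + 8.0 * trigger_direction

reps = np.vstack([clean, corrupted])

# Spectral-signature score: squared projection of each centered representation
# onto the top right singular vector of the centered representation matrix.
centered = reps - reps.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = (centered @ vt[0]) ** 2

# Flag the highest-scoring examples as suspected outliers.
k = 25
flagged = np.argsort(scores)[-k:]
n_caught = int(np.sum(flagged >= 500))  # corrupted examples occupy rows 500..524
print(f"{n_caught}/{k} corrupted examples flagged")
```

Extending this to a network of agents, as described above, means applying such robust outlier scoring to correlated time series rather than a static batch of representations; the batch version here only conveys the core statistical idea.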

Oh and Poovendran’s collaborators include professors João Hespanha, Christopher Kruegel and Giovanni Vigna at UCSB, Elisa Bertino, Berkay Celik and Ninghui Li at Purdue University, Nick Feamster at the University of Chicago, Dawn Song at the University of California, Berkeley and Gang Wang at the University of Illinois at Urbana-Champaign. The group’s work will complement Poovendran’s research into novel game theoretic approaches for modeling adversarial behavior and training intelligent agents in decision making and dynamic planning in uncertain environments — environments where the rules of engagement, and the intentions and capabilities of the players, are constantly in flux. It’s an example of one of the core ideas behind the ACTION Institute’s approach: equipping AI agents to be “lifelong learners” capable of continuously improving their domain knowledge, and with it, their ability to adapt in the face of novel attacks. The team is also keen to develop a framework that will ensure humans continue to learn right along with them.

“One of the ways this and other AI Institutes have a lasting impact is through the education and mentorship that go hand in hand with our research,” said Oh, who is also a member of the previously announced National AI Institute for Foundations of Machine Learning (IFML). “We’re committed not just to advancing new AI security tools, but also to training a new generation of talent who will take those tools to the next level.”

In addition to UCSB and the UW, partners on the ACTION Institute include Georgia Tech, University of California, Berkeley, Norfolk State University, Purdue University, Rutgers University, University of Chicago, University of Illinois Chicago, University of Illinois Urbana-Champaign and University of Virginia. Read the UCSB announcement here and a related UW ECE story here.

The ACTION Institute and AI-SDM are among seven new AI Institutes announced earlier this month with a combined $140 million from the NSF, its federal agency partners and industry partner IBM. Read the NSF announcement here.


Allen School team earns NSDI Test of Time Award for research into how third-party trackers “badger” people online

Tadayoshi Kohno and Franziska Roesner smiling and standing side by side, hands clasped in front of them, against a wall painted with visible brush strokes in shades of blue, both wearing lanyards with NSDI name tags around their necks. Kohno is wearing a grey zip-up sweatshirt over a purple t-shirt, and Roesner is wearing a blue floral-patterned blouse with the sleeves rolled up and a smartwatch with a blue wristband.
Tadayoshi Kohno (left) and Franziska Roesner at NSDI 2023. Photo by Liz Markel, courtesy of USENIX

There was a time when cookies were considered something to be savored — back when chips referred to chocolate rather than silicon. Once “cookies” became synonymous with online tracking, privacy researchers weren’t so sweet on the concept. 

That includes Allen School professors Franziska Roesner and Tadayoshi Kohno, who investigated the online tracking ecosystem for their 2012 paper “Detecting and Defending Against Third-Party Tracking on the Web.” Last month, Roesner, Kohno and co-author David Wetherall, a former Allen School professor who is now a Distinguished Engineer at Google, received the Test of Time Award at the 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 2023) for their influential work, which offered the first comprehensive evaluation of third-party trackers and their intrusion into people’s activities online. 

The team’s findings informed the nascent policy debate around web privacy that has become all the more relevant with the proliferation of social media and reliance on targeted advertising as a revenue model. They also led to the creation of new tools like Privacy Badger, a browser extension used by millions of people to protect themselves and their browsing histories by learning about and automatically blocking hidden third-party trackers. The work also inspired a significant body of follow-on research, including the team members’ subsequent paper at NSDI 2016 chronicling the increase in both the prevalence of online tracking and the complexity of tracker behavior over time.

“Considering how much time we spend online and the variety of activities we engage in, this type of tracking can yield a lot of information about a person,” said Roesner, a co-director of the Security and Privacy Research Lab at the University of Washington along with Kohno. “That’s even truer today than it was a decade ago, and I’m gratified that our work helped initiate such an important conversation and informed efforts to educate and empower users.”

At the time of the original paper’s release, third-party tracking had started to gain attention in security and privacy circles. But researchers were just nibbling around the edges, for the most part; they had a fragmented understanding of how such trackers worked and their impact on people’s online experience. Roesner — an Allen School Ph.D. student at the time — worked with Kohno and Wetherall to develop a client-side method for detecting and classifying trackers according to how they interact with the browser. They analyzed tracker prevalence and behavior on the top 500 website domains, as identified by the now-defunct web traffic analysis firm Alexa Internet, examining more than 2,000 unique pages.

“We identified 524 unique trackers, some of which had sufficient penetration across popular websites to enable them to capture a significant fraction of a user’s browsing activity — typically around 20%, and in one case, as much as 66%,” Roesner recalled.

Roesner and her colleagues cataloged five types of tracker behavior, varying from the relatively benign, to the opportunistic, to the infuriating. The behaviors spanned analytics trackers, which are generally confined to a specific site, Google Analytics being an example; “vanilla” trackers, which rely on third-party storage to track users across sites for the purposes of additional analytics or targeted advertising, such as Doubleclick; forced trackers, which include the dreaded popup or redirect that compels the user to visit their domain; referred trackers, which rely on unique identifiers leaked by other trackers; and personal trackers, which engage in cross-site tracking based on a user’s voluntary visit to their domain in other contexts. Some trackers exhibit a combination of the above.

Despite the existence of multiple tools intended to give users more control, from third-party cookie blockers to “private” browsing mode, the team found those options insufficient for preventing certain trackers from following people across the web while maintaining any semblance of functionality. This was particularly true for popular social widgets by the likes of Facebook, Twitter, LinkedIn, Digg, and others that were embedded on a growing number of sites ranging from news outlets to online storefronts.

Portrait of David Wetherall against a dark building interior, smiling and wearing wireframe glasses and a black zip-up top over a lavender collared shirt.
David Wetherall

“While users could prevent some tracking, that was not the case for social widgets,” noted Roesner. “If a user was logged into a social media site like Facebook, for instance, their activity elsewhere on the web would be tracked — non-anonymously, I would add — even if they didn’t interact with the ‘like’ button embedded on those sites.”

For those who would prefer to cover their tracks while continuing to enjoy the convenience of interacting with social widgets on their terms, Roesner and her collaborators developed ShareMeNot. The browser extension took a bite out of social widgets’ ability to construct browsing profiles of users by only allowing activation of third-party tracking cookies when a user explicitly interacted with the “like,” “share,” or other relevant buttons; if a user visited a site but did not click on the social widgets, ShareMeNot stripped the cookies from any third-party requests to those trackers.
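The policy ShareMeNot enforced can be summarized in a few lines. The following is a hypothetical Python sketch of the decision logic only — the actual tool was a browser extension, and the domain list and function names here are invented for illustration:

```python
# Hypothetical sketch of ShareMeNot's original cookie-stripping policy.
# The real tool was a browser extension; names here are illustrative.
TRACKER_DOMAINS = {"facebook.com", "twitter.com", "linkedin.com", "digg.com"}

def filter_request(request, clicked_widgets):
    """Strip cookies from requests to social-widget trackers unless the
    user explicitly clicked that widget on the current page.

    request: dict with "domain" and "headers" keys
    clicked_widgets: set of tracker domains the user has interacted with
    """
    domain = request["domain"]
    if domain in TRACKER_DOMAINS and domain not in clicked_widgets:
        # Without the cookie, the tracker cannot link this page visit
        # to the user's logged-in account.
        request["headers"].pop("Cookie", None)
    return request

req = {"domain": "facebook.com", "headers": {"Cookie": "uid=123"}}
filtered = filter_request(req, clicked_widgets=set())  # cookie is stripped
```

A request to a non-tracker domain, or to a widget the user actually clicked, passes through with its cookies intact — which is what preserved the widgets’ functionality for users who chose to use them.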

The team worked with an undergraduate research assistant in the lab, Chris Rovillos (B.S., ‘14), to refine ShareMeNot following the paper’s initial publication and address instances of the trackers attempting to circumvent the restrictions on cookies via other means. Instead of just blocking cookies, the new and improved version of the tool blocked tracker buttons altogether. In their place, ShareMeNot inserted local, stand-in versions of the buttons that users could click to either “like” a page directly or load the real button — putting users, not the trackers, in control. Roesner partnered with the nonprofit Electronic Frontier Foundation to incorporate ShareMeNot into the previously mentioned Privacy Badger, which remains an important tool for protecting users from intrusion by third-party trackers to this day.

The team’s work is notable for inspiring not only new technologies but also a new wave of researchers to focus on web tracking. One of those researchers, Umar Iqbal, followed that inspiration all the way to the Allen School.

“This is one of the seminal works in the space of web privacy and security. It had an immense influence on the community, including my own research,” observed Iqbal, a postdoc in the Security and Privacy Research Lab. “I extended several of the techniques proposed in the paper as part of my own doctoral thesis, from the measurement of online trackers, to their characterization, to building defenses. It was, in fact, one of the reasons I decided to pursue a postdoc with Franzi at UW!”

Roesner, Kohno and Wetherall were formally recognized at NSDI 2023 last month in Boston, Massachusetts. Read the research paper here.


Professors Su-In Lee and Sara Mostafavi awarded CZI Data Insights grants to advance explainable AI for biomedical research

Portrait of Su-In Lee seated at a table in front of a white board, wearing glasses and a black suit and looking off to the viewer's left (her right), holding a pen in her right hand with her elbow on the table and her left hand around a purple and white mug. A second pen and paper is visible lying flat on the table in front of the open laptop, and the corner of a second laptop is just visible in the right of the frame.
Su-In Lee (Credit: Mark Stone/University of Washington)

Single-cell genomics is revolutionizing biomedical research by enabling high-volume analysis of gene expression at the cellular level to understand the origins of disease and identify targets for potential treatment. To accelerate this progress, researchers are increasingly turning their attention to artificial intelligence (AI) tools to analyze these connections at scale. But the size and complexity of the resulting datasets, combined with noise and systematic biases in experimentation, make it difficult to build meaningful AI models from which to derive new biological insights.

Professors Su-In Lee and Sara Mostafavi of the Allen School’s Computational Biology group are working on new solutions to the problem, supported by two competitive grants from the Chan Zuckerberg Initiative’s (CZI) Data Insights program. The program supports the advancement of tools and resources that make it possible to gain greater insights into health and disease from single-cell biology datasets.

Lee directs the University of Washington’s AIMS Lab, shorthand for AI for bioMedical Sciences, where she and her collaborators develop explainable AI techniques for opening up so-called black-box models to make them more transparent and interpretable in biomedical sciences and clinical settings. Newer deep neural network architectures used in single-cell genomics, such as transformers and graph neural networks (GNNs), are ripe for such tools. While researchers have used these models to good effect to investigate the mechanisms of gene regulation and cell identity in complex tissues across multiple single-cell datasets, how the models arrive at their results remains shrouded in mystery.

The CZI Data Insights grant will support a project led by Lee, working in collaboration with professor Jian Ma at Carnegie Mellon University, to fill that void by extending principled XAI methods, such as a new framework for computing Shapley values using a learned explainer model, to transformers and GNNs. The results will enable researchers to understand which features contributed to the models’ predictions — and to what extent.
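For readers new to the concept, a Shapley value attributes a model’s prediction across its input features by averaging each feature’s marginal contribution over all coalitions of the other features. The brute-force computation below (a generic sketch, not the project’s learned-explainer framework) shows why exact computation is exponential in the number of features, and hence why a learned explainer model matters at genomic scale:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, instance):
    """Exact Shapley values for one prediction, enumerating every
    feature coalition. Features outside a coalition take baseline values."""
    n = len(instance)

    def value(coalition):
        x = [instance[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(x)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                coalition = set(subset)
                total += weight * (value(coalition | {i}) - value(coalition))
        phi.append(total)
    return phi

# Toy linear "model": each feature's attribution should equal w_i * x_i.
weights = [2.0, -1.0, 0.5]
predict = lambda x: sum(w * v for w, v in zip(weights, x))
phi = shapley_values(predict, baseline=[0.0, 0.0, 0.0], instance=[1.0, 1.0, 1.0])
```

For this linear model with a zero baseline, the attributions come out to exactly the per-feature contributions, and they always sum to the gap between the prediction and the baseline prediction — the property that makes Shapley values a principled basis for model explanation.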

Portrait of Sara Mostafavi posed in a grey cardigan open over a white button-down shirt and glasses, looking at the camera, in a building atrium with a metal and concrete elevator bank visible behind one shoulder and artwork on a white wall over the other, with white track lighting overhead.
Sara Mostafavi (Credit: Matt Hagen)

“There is an urgent need for new, explainable AI techniques that can be applied to complex neural network architectures,” said Lee. “This approach will enable researchers to rigorously interpret these models to enable data-driven biological discoveries in single-cell regulatory genomics, for which a ‘wave’ of new datasets is expected, and enhance our fundamental understanding of how a cell works.”

A second CZI-funded project led by Mostafavi, working in collaboration with Lee, will support her efforts to develop methods for predicting how cells respond differently to various environmental factors. This direction extends Mostafavi’s previous research into the use of deep neural networks to predict when and how genetic variation between people leads to differences in disease susceptibility.

“Combining recent advances in AI with emerging single-cell datasets is a promising approach for understanding the role of genetic determinants of heritable diseases such as Alzheimer’s and cancer in rare or previously unknown cell populations,” explained Mostafavi, who is principal investigator on the project. “But we need to address issues of accuracy, scalability, and interpretability in the models in order to gain meaningful biological insights.”

Mostafavi and Lee’s awards are among three earned by University of Washington researchers in this latest cycle of CZI Data Insights grants. Allen School adjunct professor William Noble, professor of genome sciences at the UW, is part of a project to develop new computational methods that will significantly improve the quantitative accuracy of single-cell proteomics data.

Learn more about the CZI Data Insights grantees here.

Researchers unveil BioTranslator, a machine learning model that bridges biological data and text to accelerate biomedical discovery

Dense swirls and plumes of brightly colored cellular material in blue, green, purple, orange and red form an irregular mass in the center of the frame. Overlaid on the red portion is a section of a chain of hexagonal shapes in red and blue representing an enzyme, highlighted in white with red circles radiating out from the center. The cellular material is pictured against a grey background patterned with tiny floating matter.
A visualization of p97, an enzyme that plays a crucial role in regulating proteins in cancer cells, inhibited from completing its normal reaction cycle by a potential small molecule drug. With BioTranslator, the first multilingual translation framework for biomedical research, scientists will be able to search potential drug targets like p97 and other non-text biological data using free-form text descriptions. National Cancer Institute, National Institutes of Health

When the novel coronavirus SARS-Cov-2 began sweeping across the globe, scientists raced to figure out how the virus infected human cells so they could halt the spread. 

What if scientists had been able to simply type a description of the virus and its spike protein into a search bar and receive information on the angiotensin-converting enzyme 2 — colloquially known as the ACE2 receptor, through which the virus infects human cells — in return? And what if, in addition to identifying the mechanism of infection for similar proteins, this same search also returned potential drug candidates known to inhibit their ability to bind to the ACE2 receptor?

Biomedical research has yielded troves of data on protein function, cell types, gene expression and drug formulas that hold tremendous promise for assisting scientists in responding to novel diseases as well as fighting old foes such as Alzheimer’s, cancer and Parkinson’s. Historically, their ability to explore these massive datasets has been hampered by an outmoded model that relied on painstakingly annotated data, unique to each dataset, that precludes more open-ended exploration.

But that may be about to change. In a recent paper published in Nature Communications, Allen School researchers and their collaborators at Microsoft and Stanford University unveiled BioTranslator, the first multilingual translation framework for biomedical research. BioTranslator — a portmanteau of “biological” and “translator” — is a state-of-the-art, zero-shot classification tool for retrieving non-text biological data using free-form text descriptions.  

Portrait of Hanwen Xu wearing glasses and a dark button-down shirt open over a white t-shirt, with a neutral expression and standing outdoors with blurred green and pink foliage behind him. The sun illuminates the right side of his face (left side from the viewer's perspective), side by side with a portrait of Addie Woicik outdoors on a snow-covered glacier, with hair pulled back and sunglasses perched on her head. She is wearing a periwinkle scarf around her neck and a pale red t-shirt with the straps of her backpack visible.
Hanwen Xu (left) and Addie Woicik

“BioTranslator serves as a bridge connecting the various datasets and the biological modalities they contain together,” explained lead author Hanwen Xu, a Ph.D. student in the Allen School. “If you think about how people who speak different languages communicate, they need to translate to a common language to talk to each other. We borrowed this idea to create our model that can ‘talk’ to different biological data and translate them into a common language — in this case, text.”

The ability to perform text-based search across multiple biological databases breaks from conventional approaches that rely on controlled vocabularies (CVs). As the name implies, CVs come with some constraints. Once the original dataset has been annotated, through a painstaking manual or automatic process governed by a predefined set of terms, that annotation is difficult to extend to new findings; meanwhile, the creation of new CVs is time consuming and requires extensive domain knowledge to compose the data descriptions.

BioTranslator frees scientists from this rigidity by enabling them to search and retrieve biological data with the ease of free-form text. Allen School professor Sheng Wang, senior author of the paper, likens the shift to when the act of finding information online progressed from combing through predefined directories to being able to enter a search term into open-ended search engines like Google and Bing.

Portrait of Sheng Wang wearing glasses and a navy blue suit jacket over a pink button-down shirt, standing in front of a window on a high floor overlooking low-rise buildings surrounded by trees with a mountain range barely visible in the background.
Sheng Wang

“The old Yahoo! directories relied on these hierarchical categories like ‘education,’ ‘health,’ ‘entertainment’ and so on. That meant that If I wanted to find something online 20 years ago, I couldn’t just enter search terms for anything I wanted; I had to know where to look,” said Wang. “Google changed that by introducing the concept of an intermediate layer that enables me to enter free text in its search bar and retrieve any website that matches my text. BioTranslator acts as that intermediate layer, but instead of websites, it retrieves biological data.”

Wang and Xu previously explored text-based search of biological data by developing ProTranslator, a bilingual framework for translating text to protein function. While ProTranslator is limited to proteins, BioTranslator is domain-agnostic, meaning it can pull from multiple modalities in response to a text-based input — and, as with the switch from old-school directories to modern search engines, the person querying the data no longer has to know where to look.

BioTranslator does not merely perform similarity search on existing CVs using text-based semantics; instead, it translates the user-generated text description into a biological data instance, such as a protein sequence, and then searches for similar instances across biological datasets. The framework is based on large-scale pretrained language models that have been fine-tuned using biomedical ontologies from a variety of related domains. Unlike other language models that are having a moment — ChatGPT comes to mind — BioTranslator isn’t limited to searching text but rather can pull from various data structures, including sequences, vectors and graphs. And because it’s bidirectional, BioTranslator not only can take text as input, but also generate text as output.
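The retrieval step can be illustrated with a toy example. In the real system, fine-tuned language models embed text and biological instances into a shared space; in the sketch below, the embeddings are random stand-ins, with a query placed near one instance purely to show how cosine-similarity retrieval over that shared space works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for encoder outputs: in the real system, fine-tuned language
# models embed matching text/biology pairs near each other in this space.
bio_embeddings = rng.normal(size=(500, 64))   # e.g. encoded protein sequences

# A text query whose embedding lands near instance 42 (by construction here).
query_embedding = bio_embeddings[42] + 0.1 * rng.normal(size=64)

def retrieve(query, corpus, k=5):
    """Return indices of the k corpus rows most cosine-similar to query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return np.argsort(c @ q)[::-1][:k]

top = retrieve(query_embedding, bio_embeddings)  # instance 42 ranks first
```

The corpus rows could just as easily be encoded gene-expression vectors or pathway graphs; as long as every modality is mapped into the same space, a single free-text query can be matched against all of them.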

“Once BioTranslator converts the biological data to text, people can then plug that description into ChatGPT or a general search engine to find more information on the topic,” Xu noted.

A diagram from the paper illustrating how BioTranslator converts Input: user-written text to Output: non-text biological data. On the left are three examples of text descriptions, fed through BioTranslator symbolized by a collection of circles illuminated in the center and connected to each other by lines, and on the right are the corresponding biological data instances. A cell found in the embryo before the formation of all the gem layers is complete returns gene expression data in the form of a row of boxes of varying shades of maroon, pink and lavender; The removal of sugar residues from a glycosylated protein returns a protein sequence SVLLRSGLGPLCAARAA….VVAGFELAWQ; A complex network of interacting proteins and enzymes is required for DNA replication returns a pathway illustrated by a collection of 11 circles connected to one or more of the other circles by lines.
BioTranslator functions as an intermediate layer between written text descriptions and biological data. The framework, which is based on large-scale pretrained language models that have been refined using biological ontologies from a variety of domains, translates user-generated text into a non-text biological data instance — for example, a protein sequence — and searches for similar instances across multiple biological datasets. Nature Communications

Xu and his colleagues developed BioTranslator using an unsupervised learning approach. Part of what makes BioTranslator unique is its ability to make predictions across multiple biological modalities without the benefit of paired data.

“We assessed BioTranslator’s performance on a selection of prediction tasks, spanning drug-target interaction, phenotype-gene association and phenotype-pathway association,” explained co-author and Allen School Ph.D. student Addie Woicik. “BioTranslator was able to predict the target gene for a drug using only the biological features of the drugs and phenotypes — no corresponding text descriptions — and without access to paired data between two of the non-text modalities. This sets it apart from supervised approaches like multiclass classification and logistic regression, which require paired data in training.”

BioTranslator outperformed both of those approaches in two out of the four tasks, and was better than the supervised approach that doesn’t use class features in the remaining two. In the team’s experiments, BioTranslator also successfully classified novel cell types and identified marker genes that were omitted from the training data. This indicates that BioTranslator can not only draw information from new or expanded datasets — no additional annotation or training required — but also contribute to the expansion of those datasets.

Portrait of Hoifung Poon wearing a blue button-down shirt against a grey background side by side with a portrait of Russ Altman wearing a blue and white striped button-down shirt against a blue sky.
Hoifung Poon (left) and Dr. Russ Altman

“The number of potential text and biological data pairings is approaching one million and counting,” Wang said. “BioTranslator has the potential to enhance scientists’ ability to respond quickly to the next novel virus, pinpoint the genetic markers for diseases, and identify new drug candidates for treating those diseases.”

Other co-authors on the paper are Allen School alum Hoifung Poon (Ph.D., ‘11), general manager at Microsoft Health Futures, and Dr. Russ Altman, the Kenneth Fong Professor of Bioengineering, Genetics, Medicine and Biomedical Data Science, with a courtesy appointment in Computer Science, at Stanford University. Next steps for the team include expanding the model beyond expertly written descriptions to accommodate more plain language and noisy text.

Read the Nature Communications paper here, and access the BioTranslator code package here.

UW researchers show how to tap into the sensing capabilities of any smartphone to screen for prediabetes

A person holds a black smartphone with the rear of the phone facing the camera in their left hand, and a narrow rectangular glucose test strip with various tiny circuitry attached in the other hand. Only the person's hands and wrists are visible in the frame. The shot is professionally lit against a dark grey, almost black, background.
GlucoScreen would enable people to self-screen for prediabetes using a modified version of a commercially available test strip with any smartphone — no separate glucometer required. Leveraging the phone’s built-in capacitive touch sensing capabilities, GlucoScreen transmits test data from the strip to the phone via a series of simulated taps on the screen. The app applies machine learning to analyze the data and calculate a blood glucose reading. Raymond C. Smith/University of Washington

According to the U.S. Centers for Disease Control and Prevention, one out of every three adults in the United States has prediabetes, a condition marked by elevated blood sugar levels that could lead to the development of type 2 diabetes. The good news is that, if detected early, prediabetes can be reversed through lifestyle changes such as improved diet and exercise. The bad news? Eight out of 10 Americans with prediabetes don’t know that they have it, putting them at increased risk of developing diabetes as well as disease complications that include heart disease, kidney failure and vision loss.

Current screening methods typically involve a visit to a health care facility for laboratory testing and/or the use of a portable glucometer for at-home testing, meaning access and cost may be barriers to more widespread screening. But researchers at the University of Washington’s Paul G. Allen School of Computer Science & Engineering and UW Medicine may have found the sweet spot when it comes to increasing early detection of prediabetes. They developed GlucoScreen, a new system that leverages the capacitive touch sensing capabilities of any smartphone to measure blood glucose levels without the need for a separate reader. Their approach will make glucose testing less costly and more accessible — particularly for one-time screening of a large population. 

The team describes GlucoScreen in a new paper published in the latest issue of the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT).

“In conventional screening, a person applies a drop of blood to a test strip, where the blood reacts chemically with the enzymes on the strip. A glucometer is used to analyze that reaction and deliver a blood glucose reading,” explained lead author Anandghan Waghmare, a Ph.D. student in the Allen School’s UbiComp Lab. “We took the same test strip and added inexpensive circuitry that communicates data generated by that reaction to any smartphone through simulated tapping on the screen. GlucoScreen then processes the data and displays the result right on the phone, alerting the person if they are at risk so they know to follow up with their physician.”

The GlucoScreen test strip samples the electrochemical reaction induced by the mixing of blood and enzymes as an amplitude along a curve at a rate of five times per second. The strip transmits this curve data to the phone encoded in a series of touches at variable speeds using a technique called pulse width modulation. “Pulse width” refers to the distance between peaks in the signal — in this case, the length between taps. Each pulse width represents a value along the curve; the greater the distance between taps for a particular value, the higher the amplitude associated with the electrochemical reaction on the strip.
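As a rough illustration of that encoding, the sketch below maps each sampled amplitude to an inter-tap gap and back. All timing constants and value ranges are invented for illustration and are not GlucoScreen’s actual parameters:

```python
# Illustrative pulse-width-modulation scheme: each sampled amplitude is
# encoded as the time gap between consecutive simulated screen taps.
# (Constants are made up for illustration, not GlucoScreen's real timing.)
MIN_GAP_MS, MAX_GAP_MS = 50, 250   # shortest/longest allowed inter-tap gap
MAX_AMPLITUDE = 1023               # e.g. a 10-bit reading from the strip

def encode(samples):
    """Map each amplitude to an inter-tap gap: larger value, longer gap."""
    span = MAX_GAP_MS - MIN_GAP_MS
    return [MIN_GAP_MS + round(s / MAX_AMPLITUDE * span) for s in samples]

def decode(gaps):
    """Invert the mapping on the phone side to recover the curve."""
    span = MAX_GAP_MS - MIN_GAP_MS
    return [round((g - MIN_GAP_MS) / span * MAX_AMPLITUDE) for g in gaps]

curve = [0, 256, 512, 768, 1023]   # amplitudes sampled from the reaction
gaps = encode(curve)               # inter-tap gaps the strip would produce
recovered = decode(gaps)           # the phone's reconstruction of the curve
```

Quantizing to whole milliseconds introduces a small round-trip error; a real implementation would choose its timing resolution so that the recovered curve is accurate enough for the downstream glucose calculation.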

Closeup of a person conducting a glucose test by applying blood from their finger to the biosensor attached to the GlucoScreen test strip, as seen from the side. The strip is folded in half over the top of the smartphone, with tiny photodiodes and circuitry facing the flash, which is illuminated, on the rear of the phone and one end of the strip affixed to the upper third of the phone's front touch screen.
The GlucoScreen app walks the user through each step of the testing process, which is similar to a conventional glucometer-based test. Tiny photodiodes on the GlucoScreen test strip enable it to draw the power it needs to function entirely from the phone’s flash. (Note: The blood in the photo is not real.) Raymond C. Smith/University of Washington

“You communicate with your phone by tapping the screen with your finger,” said Waghmare. “That’s basically what the strip is doing, only instead of a single tap to produce a single action, it’s doing multiple taps at varying speeds. It’s comparable to how Morse code transmits information through tapping patterns.” 

The advantage of this technique is that it does not require complicated electronic components, which minimizes the cost to manufacture the strip and the power required for it to operate compared to more conventional communication methods like Bluetooth and WiFi. All of the data processing and computation occurs on the phone, which simplifies the strip and further reduces the cost.

“The test strip doesn’t require batteries or a USB connection,” noted co-author Farshid Salemi Parizi, a former Ph.D. student in the UW Department of Electrical & Computer Engineering who is now a senior machine learning engineer at OctoML. “Instead, we incorporated photodiodes into our design so that the strip can draw what little power it needs for operation from the phone’s flash.”

The flash is automatically engaged by the GlucoScreen app, which walks the user through each step of the testing process. First, a user affixes each end of the test strip to the front and back of the phone as directed. Next, they prick their finger with a lancet, as they would in a conventional test, and apply a drop of blood to the biosensor attached to the test strip. After the data is transmitted from the strip to the phone, the app applies machine learning to analyze the data and calculate a blood glucose reading.

That stage of the process is similar to that performed on a commercial glucometer. What sets GlucoScreen apart, in addition to its novel touch technique, is its universality.

“Because we use the built-in capacitive touch screen that’s present in every smartphone, our solution can be easily adapted for widespread use. Additionally, our approach does not require low-level access to the capacitive touch data, so you don’t have to access the operating system to make GlucoScreen work,” explained co-author Jason Hoffman, a Ph.D. student in the Allen School. “We’ve designed it to be ‘plug and play.’ You don’t need to root the phone — in fact, you don’t need to do anything with the phone, other than install the app. Whatever model you have, it will work off the shelf.”

A smartphone with a glucose test strip affixed to the front and rear, with a biosensor and strip for applying a drop of blood sticking out above the phone's top edge. The phone's touch screen is displayed, with the end of the test strip that comes up over the top edge of the phone affixed to the upper third of the screen, which is pale grey and otherwise blank. The rest of the screen is white with text: Your glucose level is 91 mg/dl, a text link: Learn more about what this number means, and a blue button labeled: Finish.
After processing the data from the test strip, GlucoScreen displays the calculated blood glucose reading on the phone. Raymond C. Smith/University of Washington

Hoffman and his colleagues evaluated their approach using a combination of in vitro and clinical testing. Due to the COVID-19 pandemic, they had to delay the latter until 2021 when, on a trip home to India, Waghmare connected with Dr. Shailesh Pitale at Dew Medicare and Trinity Hospital. Upon learning about the UW project, Dr. Pitale agreed to facilitate a clinical study involving 75 consenting patients who were already scheduled to have blood drawn for a laboratory blood glucose test. Using that laboratory test as the ground truth, Waghmare and the team evaluated GlucoScreen’s performance against that of a conventional strip and glucometer. 

While the researchers stress that additional testing is needed, their early results suggest GlucoScreen’s accuracy is comparable to that of standard glucometer testing. Importantly, the system was shown to be accurate at the crucial threshold between a normal blood glucose level (at or below 99 mg/dL) and prediabetes (100 to 125 mg/dL). Given the scarcity of training data they had to work with for the clinical testing model, the researchers posit that GlucoScreen’s performance will improve with more inputs.
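The cutoffs cited above lend themselves to a simple illustration. The sketch below is hypothetical code, not part of GlucoScreen; the thresholds for normal (at or below 99 mg/dL) and prediabetes (100 to 125 mg/dL) come from the study, and the `classify_glucose` helper is illustrative only:

```python
def classify_glucose(mg_dl: float) -> str:
    """Classify a blood glucose reading using the thresholds cited in the study.

    At or below 99 mg/dL is considered normal; 100-125 mg/dL indicates
    prediabetes; higher readings warrant clinical follow-up.
    """
    if mg_dl <= 99:
        return "normal"
    elif mg_dl <= 125:
        return "prediabetes"
    else:
        return "above prediabetic range"

# 91 mg/dL is the reading shown in the app screenshot below
print(classify_glucose(91))
```

A screening tool's value hinges on accuracy precisely at these boundaries, which is why the study singled out the 99/100 mg/dL threshold.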

According to co-author Dr. Matthew Thompson, given how common prediabetes and diabetes are globally, this type of technology has the potential to change clinical care.

“One of the barriers I see in my clinical practice is that many patients can’t afford to test themselves, as glucometers and their test strips are too expensive. And, it’s usually the people who most need their glucose tested who face the biggest barriers,” said Thompson, a family physician and professor in the UW Department of Family Medicine and Department of Global Health. “Given how many of my patients use smartphones now, a system like GlucoScreen could really transform our ability to screen and monitor people with prediabetes and even diabetes.”

GlucoScreen is presently a research prototype; additional user-focused and clinical studies, along with alterations to how test strips are manufactured and packaged, would be required before the system could be made widely available. According to senior author Shwetak Patel, the Washington Research Foundation Entrepreneurship Endowed Professor in Computer Science & Engineering and Electrical & Computer Engineering at the UW, the project demonstrates how we have only begun to tap into the potential of smartphones as a health screening tool.

“Now that we’ve shown we can build electrochemical assays that can work with a smartphone instead of a dedicated reader, you can imagine extending this approach to expand screening for other conditions,” Patel said.

Yuntao Wang, a research professor at Tsinghua University and former visiting professor at the Allen School, is also a co-author of the paper. This research was funded in part by the Bill & Melinda Gates Foundation.


With HAILEY, researchers demonstrate how AI can lend a helping hand for mental health support

Sometimes it can be hard to find just the right words to help someone who is struggling with mental health challenges. But recent advances in artificial intelligence could soon mean that assistance is just a click away — and delivered in a way that enhances, not replaces, the human touch. 

In a new paper published in Nature Machine Intelligence, a team of computer scientists and psychologists at the University of Washington and Stanford University led by Allen School professor Tim Althoff present HAILEY, a collaborative AI agent that facilitates increased empathy in online mental health support conversations. HAILEY — short for Human-AI coLlaboration approach for EmpathY — is designed to assist peer supporters who are not trained therapists by providing just-in-time feedback on how to increase the empathic quality of their responses to support seekers in text-based chat. The goal is to achieve better outcomes for people who look to a community of peers for support in addition to, or in the absence of, access to licensed mental health providers.

Side-by-side portraits of Ashish Sharma and Inna Lin. Sharma is wearing a dark blue suit with light blue shirt and striped tie, pictured in front of a green leafy background. Lin is wearing a lavender striped button-down shirt with dark-rimmed eyeglasses perched atop her head, standing in front of a sun-dappled shoreline.
Ashish Sharma (left) and Inna Lin

“Peer-to-peer support platforms like Reddit and TalkLife enable people to connect with others and receive support when they are unable to find a therapist, or they can’t afford it, or they’re wary of the unfortunate stigma around seeking treatment for mental health,” explained lead author Ashish Sharma, a Ph.D. student in the Allen School’s Behavioral Data Science Lab. “We know that greater empathy in mental health conversations increases the likelihood of relationship-forming and leads to more positive outcomes. But when we analyzed the empathy in conversations taking place on these platforms on a scale of zero for low empathy to six for high empathy, we found that they averaged an expressed empathy level of just one. So we worked with mental health professionals to transform this very complex construct of empathy into computational methods for helping people to have more empathic conversations.” 

HAILEY is different from a general-purpose chatbot like ChatGPT. As a human-AI collaboration agent, HAILEY harnesses the power of large language models specifically to assist users in crafting more empathic responses to people seeking support. The system offers users just-in-time, actionable feedback in the form of onscreen prompts suggesting the insertion of new empathic sentences to supplement existing text or the replacement of low-empathy sentences with more empathic options. In one example cited in the paper, HAILEY suggests replacing the statement “Don’t worry!” with the more empathic acknowledgment, “It must be a real struggle!” In the course of conversation, the human user can choose to incorporate HAILEY’s suggestions with the touch of a button, modify the suggested text to put it in their own words, and obtain additional feedback.
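The two kinds of feedback described above, inserting a new empathic sentence and replacing a low-empathy one, can be modeled as simple edit operations applied to a draft reply. The sketch below is an illustrative data model using the example from the paper; it is not HAILEY's actual implementation, and the `Suggestion` type is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """One piece of just-in-time feedback: replace a sentence or insert a new one."""
    action: str   # "replace" or "insert"
    target: str   # sentence to replace (empty string for inserts)
    text: str     # suggested empathic sentence

def apply_suggestion(draft: str, s: Suggestion) -> str:
    """Apply one suggestion to the peer supporter's draft response."""
    if s.action == "replace":
        return draft.replace(s.target, s.text)
    # inserts simply append the new empathic sentence
    return (draft + " " + s.text).strip()

draft = "Don't worry! I'm there for you."
fixed = apply_suggestion(
    draft, Suggestion("replace", "Don't worry!", "It must be a real struggle!")
)
# fixed == "It must be a real struggle! I'm there for you."
```

In the real system the human stays in control: each suggestion is offered onscreen, and the supporter decides whether to accept, edit or ignore it.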

Unlike a chatbot that actively learns from its online interactions and incorporates those lessons into its subsequent exchanges, HAILEY is a closed system, meaning all training occurs offline. According to co-author David Atkins, CEO of Lyssn.io, Inc. and an affiliate professor in the UW Department of Psychiatry and Behavioral Sciences, HAILEY avoids the potential pitfalls associated with other AI systems that have recently made headlines.

“When it comes to delivering mental health support, we are dealing with open-ended questions and complex human emotions. It’s critically important to be thoughtful in how we deploy technology for mental health,” explained Atkins. “In the present work, that’s why we focused first on developing a model for empathy, rigorously evaluated it, and only then did we deploy it in a controlled environment. As a result, HAILEY represents a very different approach from just asking a generic, generative AI model to provide responses.”

Side-by-side portraits of Adam Miner and David Atkins. Miner is wearing a periwinkle and fuchsia striped button-down shirt against a solid black background. Atkins is wearing a deep blue button-down shirt with tiny white polka dots, standing in front of a leafy green background.
Adam Miner (left) and David Atkins

HAILEY builds upon the team’s earlier work on PARTNER, a model trained on a new task of empathic rewriting using deep reinforcement learning. The project, which represented the team’s first foray into the application of AI to increase empathy in online mental health conversations while maintaining conversational fluency, contextual specificity, and diversity of responses, earned a Best Paper Award at The Web Conference (WWW 2021).

The team evaluated HAILEY in a controlled, non-clinical study involving 300 peer supporters who participate in TalkLife, an online peer-to-peer mental health support platform with a global reach. To preserve users’ safety, the study was conducted off-platform via an interface similar to TalkLife’s, and participants were given basic training in crafting empathic responses to enable the researchers to better gauge the effect of HAILEY’s just-in-time feedback versus more traditional feedback or training.

The peer supporters were split into two groups: a human-only control group that crafted responses without feedback, and a “treatment” group in which the human writers received feedback from HAILEY. Each participant was asked to craft responses to a unique set of 10 posts by people seeking support. The researchers evaluated the levels of empathy expressed in the results using both human and automated methods. The human evaluators — all TalkLife users — rated the responses generated by human-AI collaboration as more empathic than human-only responses nearly 47% of the time, and as equivalent in empathy roughly 16% of the time; that is, the responses enhanced by human-AI collaboration were preferred more often than those authored solely by humans. Using their 0–6 empathy classification model, the researchers also found that the human-AI approach yielded responses containing 20% higher levels of empathy compared to their human-only generated counterparts.

In addition to analyzing the conversations, the team asked the members of the human-AI group about their impressions of the tool. More than 60% reported that they found HAILEY’s suggestions helpful and/or actionable, and 77% would like to have such a feedback tool available on the real-world platform. According to co-author and Allen School Ph.D. student Inna Lin, although the team had hypothesized that human-AI collaboration would increase empathy, she and her colleagues were “pleasantly surprised” by the results. 

“The majority of participants who interacted with HAILEY reported feeling more confident in their ability to offer support after using the tool,” Lin noted. “Perhaps most encouraging, the people who reported to us that they have the hardest time incorporating more empathy into their responses improved the most when using HAILEY. We found that for these users, the gains in empathy from employing human-AI collaboration were 27% higher than for people who did not find it as challenging.”

A row of four illustrations of a smartphone interface displaying the header "Facilitating Empathic Conversations" followed by a seeker post "My job is becoming more and more stressful with each passing day" and with two buttons, "Flag" and "Next," at the bottom. The first image shows an AI agent offering "Would you like some help with your response?" The second shows a draft response entered: "Don't worry! I'm there for you." The third image shows suggested edits: replace "Don't worry!" with "It must be a real struggle!" and insert "Have you tried talking to your boss?" The fourth image shows the response incorporating the previous suggestions, with a message from the AI agent: "Looks great. Tap here for more help."
An example of how HAILEY assists peer supporters to incorporate more empathic language in their responses to people seeking mental health support.

According to co-author Adam Miner, a licensed clinical psychologist and clinical assistant professor in Stanford University’s Department of Psychiatry and Behavioral Sciences, HAILEY is an example of how to leverage AI for mental health support in a safe and human-centered way.

“Our approach keeps humans in the driver’s seat, while providing real-time feedback about empathy when it matters the most,” said Miner. “AI has great potential to improve mental health support, but user consent, respect and autonomy must be central from the start.”

Portrait of Tim Althoff wearing a navy blue and sage green checked button-down shirt and dark-framed eyeglasses in a building atrium with blurred green-tinged glass, metal and wood accents in the background.
Tim Althoff

To that end, the team notes that more work needs to be done before a tool like HAILEY will be ready for real-world deployment. Those considerations range from the practical, such as how to effectively filter out inappropriate content and scale up the system’s ability to provide feedback on thousands of conversations simultaneously and in real time, to the ethical, such as what disclosures should be made about the role of AI in responses to people seeking support.

“People might wonder ‘why use AI’ for this aspect of human connection,” Althoff said in an interview with UW News. “In fact, we designed the system from the ground up not to take away from this meaningful person-to-person interaction.

“Our study shows that AI can even help enhance this interpersonal connection,” he added.

Read the UW News Q&A with Althoff here and the Nature Machine Intelligence paper here.

Luis Ceze named Fellow of the Association for Computing Machinery for advancing new paradigms in computer architecture and programming systems

Portrait of Luis Ceze standing with arms crossed, smiling at the camera, against a grey background. Luis is wearing glasses with dark acrylic frames and clear lenses, black short-sleeved, open-necked shirt, and a black smartwatch on his left wrist.

Since he first arrived at the University of Washington in 2007, Allen School professor Luis Ceze has worn many hats: teacher, mentor, researcher, entrepreneur, venture investor. As of this week, he can add Fellow of the Association for Computing Machinery to that list after the organization bestowed upon him its most prestigious level of membership for “contributions to developing new architectures and programming systems for emerging applications and computing technologies.”

A computer architect by training, Ceze has been at the forefront of an expanding vision of the future of computation — and challenging the computer architecture community to rethink what a computer even is, thanks in part to some nifty research at the intersection of information technology and biology. His work has also extended to reimagining the hardware/software stack and embracing the emerging capabilities of machine learning.

“I’m motivated by the question of how we can build new programming models with and for future technologies and applications,” said Ceze, the inaugural holder of the Edward D. Lazowska Endowed Professorship at the Allen School. “There is so much untapped potential in drastically improving efficiency, enabling new types of applications, and making use of new hardware and device technology. From machine learning to automated hardware/software to molecular programming, we are in the midst of a new computing revolution.”

Ceze has played a significant role in enabling that revolution, having broken new ground with his work on DNA-based data storage and computing. As co-director of the Molecular Information Systems Lab, Ceze has teamed up with Allen School colleagues, Microsoft researchers and synthetic DNA supplier Twist Bioscience on an ambitious series of projects that demonstrate synthetic DNA’s potential as a data storage medium, developing a process for converting those digital 0s and 1s into the As, Ts, Cs and Gs of DNA — and then, crucially, back again — that combined advances in biotechnology with computational techniques such as error encoding schemes.
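The core conversion step described above, mapping digital bits onto the four DNA bases and back, can be illustrated with a naive two-bits-per-base scheme. The MISL pipeline is far more sophisticated (it layers on error-correcting codes and synthesis/sequencing constraints), so the mapping below is only a toy sketch:

```python
# Naive mapping: each pair of bits becomes one DNA base.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Convert a byte payload into a strand of A/C/G/T, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand of bases."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Round trip: digital -> molecular -> digital
assert decode(encode(b"hello")) == b"hello"
```

The "crucially, back again" part is where the error-correction mentioned above earns its keep: real DNA synthesis and sequencing introduce errors that a bare mapping like this one cannot survive.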

“Life has produced this fantastic molecule called DNA that efficiently stores all kinds of information about your genes and how a living system works — it’s very, very compact and very durable,” Ceze explained in a UW News release in 2016. “This is an example where we’re borrowing something from nature — DNA — to store information. But we’re using something we know from computers — how to correct memory errors — and applying that back to nature’s ‘device technology.’”

Since their initial paper appeared at the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Ceze and his MISL collaborators have set a new record for the amount of data stored in DNA, demonstrated the ability to perform random access to selectively retrieve stored files and convert them back to digital format, and developed a method for performing content-based similarity search of digital image files stored in DNA — moving past an initial focus on DNA’s prospects as an archival storage medium to, as Ceze observed at the time, “pave the way for hybrid molecular-electronic computer systems.” The team also built a prototype of an automated, end-to-end system for encoding data in DNA. 

Ceze subsequently initiated a collaboration with Seattle-based artist Kate Thompson to produce a portrait of pioneering British scientist Rosalind Franklin — the first person to have captured an image of the DNA double-helix — using paint infused with synthetic DNA in which the lab had encoded photos of memories collected from people around the world. Since then, Ceze and his fellow MISL researchers have branched out to develop a new platform for digital microfluidics automation — also known as “lab on a chip” — as well as a portable molecular tagging system and the capability for living cells to interface with computers.

“Our initial work on DNA data storage helped motivate and inform U.S. government research investment in this space, and then it expanded to other directions,” Ceze said. “And it was brought about by a collaborative team involving computer system architects, molecular biologists, machine learning engineers, and others. What we have in common is a curiosity and an excitement about what computing can learn from biology, and vice versa. Not many computer science schools have their own wet lab!”

Ceze didn’t need a wet lab for his other innovation: TVM, short for Tensor Virtual Machine, a flexible, efficient, end-to-end optimization framework for deploying machine learning applications across a variety of hardware platforms. Developed by a team that combined expertise in computer architecture, systems and machine learning, TVM bridged the gap between deep learning systems optimized for productivity and various hardware platforms, each of which is accompanied by its own programming, performance and efficiency constraints. TVM would allow researchers and practitioners to rapidly deploy deep learning applications on a range of systems — from mobile phones, to embedded devices, to specialized chips — without having to sacrifice battery power or speed.

“Efficient deep learning needs specialized hardware,” Ceze noted at the time. “Being able to quickly prototype systems using FPGAs and new experimental ASICs is of extreme value.”

Ceze and his collaborators later teamed up with Amazon Web Services to build upon the TVM stack with the NNVM — short for Network Virtual Machine — compiler for deploying deep learning frameworks across a variety of platforms and devices. A year after TVM’s initial release, the team introduced the Versatile Tensor Accelerator, or VTA, an open-source customizable deep-learning accelerator for exploring hardware-software co-design that enables researchers to rapidly explore novel network architectures and data representations that would otherwise require specialized hardware support. 

The team eventually handed off TVM to the non-profit Apache Software Foundation as an incubator project. Ceze subsequently co-founded a company, OctoML, that builds upon and uses the Apache TVM framework to help companies deploy machine learning applications on any hardware, reducing effort and operational costs. To date, the UW spinout — for which Ceze serves as CEO — has raised $132 million from investors and currently employs more than 130 people, with the majority in Seattle and the rest spread across the U.S. and abroad. 

Before delving into deep learning accelerators and DNA synthesizers, Ceze made his mark in approximate computing. Combining aspects of programming languages, compilers, processor and accelerator architectures, machine learning, storage technologies, and wireless communication, Ceze and his colleagues developed a principled approach for identifying permissible tradeoffs between the correctness and efficiency of certain applications, such as those for search and video, to achieve significant energy savings in exchange for minimal sacrifices in output quality. 

Their initial contributions revolved around EnerJ — referred to as “the language of good-enough computing” — a Java extension that enables developers to designate which program components should yield precise or approximate results to achieve performance savings, and then check the quality of output and recompute or reduce the approximation as warranted. The team also developed a pair of hardware innovations in the form of an instruction set architecture (ISA) extension that provided for approximation operations and storage along with a dual-voltage microarchitecture, called Truffle, that enabled both approximate and precise computation to be controlled at a fine grain by the compiler. Ceze and his colleagues subsequently proposed a new technique for accelerating approximate programs using low-power neural processing units and dual mechanisms for approximate data storage that improve performance and density while extending the usable life of solid-state storage technologies such as Flash.
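EnerJ's annotations are Java type qualifiers, but the check-and-recompute pattern described above can be sketched in Python: run a cheap approximate path, check the quality of its output, and fall back to the precise computation when the result misses a quality bound. The function names below are illustrative, not EnerJ's API, and a real system would use a cheap heuristic quality check rather than recomputing the exact answer:

```python
def precise_mean(xs):
    """The precise (expensive, exact) version of the computation."""
    return sum(xs) / len(xs)

def approximate_mean(xs, stride=4):
    """Cheap approximation: sample only every `stride`-th element."""
    sample = xs[::stride]
    return sum(sample) / len(sample)

def mean_with_quality_check(xs, tolerance=0.05):
    """Accept the approximate result if it is within tolerance; else recompute.

    Note: computing the exact answer just to check quality defeats the purpose;
    this stands in for the cheap quality estimators a real system would use.
    """
    approx = approximate_mean(xs)
    exact = precise_mean(xs)
    if abs(approx - exact) <= tolerance * abs(exact):
        return approx          # "good enough" output, at lower cost
    return exact               # quality bound missed: fall back to precise
```

The energy win in EnerJ comes from running the approximate components on cheaper hardware paths (such as Truffle's low-voltage mode) while the type system guarantees that precise data is never contaminated by approximate results.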

In addition to his roles at the Allen School and OctoML, in his “free time” Ceze is also a venture partner at Madrona Venture Group and chairs their technical advisory board. Madrona funded OctoML and his first startup, Corensic, which was spun out of the UW in 2008. Before his ascension to ACM Fellow, Ceze shared the ACM SIGARCH Maurice Wilkes Award from the ACM Special Interest Group on Computer Architecture with MISL co-director and Allen School affiliate professor Karin Strauss, senior principal research manager at Microsoft. He is the co-author of multiple Best Papers and IEEE Micro Top Picks and holds a total of 29 patents based on his research. To date, he has guided 23 Ph.D. students as they earned their degrees on their way to launching careers in academia or industry.

“Computing is an extremely rich field of intellectual pursuit, and it is especially exciting now with the convergence of abundant computing resources, new AI techniques, and the ability to interact with natural systems from the molecular level all the way to the cognitive level,” said Ceze. “I’m honored by this recognition and am extremely grateful to all my Ph.D. advisees and collaborators for contributing so much to the work and to my career!”

Read the ACM announcement here.

Congratulations, Luis!
