Left to right: Anna Karlin, Nathan Klein and Shayan Oveis Gharan
The Traveling Salesperson Problem (TSP), one of the core problems underpinning the theory and practice of computer science and one whose reach extends into multiple other domains, seeks to find the most efficient route over multiple destinations and back to the starting point. Short of finding the optimum, computer scientists have relied on approximation algorithms to chip away at the problem by finding an answer that is, as Klein described in a piece he wrote for The Conversation, “good enough.” As the name suggests, TSP has obvious applications in transportation and logistics. But, Klein explained, a solution will have implications for multiple other domains, from genomics to circuit board design.
“This is important for more than just planning routes,” Klein wrote. “Any of these hard problems can be encoded in the Traveling Salesperson Problem, and vice versa: Solve one and you’ve solved them all.”
Klein and his colleagues may not have completely solved the problem, but they brought the field an important — and hugely symbolic — step closer. Their algorithm, which they revealed to the world last summer, is the first to break the elusive 50% threshold for TSP — meaning the cost of its result will always be less than 50% above the optimum. While their result represents only a marginal improvement over the previous state of the art in real terms, at roughly 49.99999%, the team’s achievement offers a foundation on which they and their fellow researchers can build in future work. And build they most assuredly will; according to Karlin, once a researcher encounters TSP, it is very difficult to set it aside.
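To make the idea of an approximation guarantee concrete, here is a minimal sketch of a much older and simpler heuristic: the MST-doubling algorithm, which for any metric instance returns a tour costing at most 100% above the optimum. The Karlin-Klein-Oveis Gharan algorithm, which edges under the 50% barrier, is far more intricate; this toy version only illustrates what a provable bound on tour cost looks like in code.

```python
# A minimal sketch of a classic approximation algorithm for metric TSP:
# the MST-doubling heuristic, whose tour is provably at most 100% above
# the optimum. Illustrative only; it is NOT the Karlin-Klein-Oveis Gharan
# algorithm, which achieves the slightly-better-than-50% guarantee.
import math

def mst_double_tree_tour(points):
    """Return a tour over 2D points costing at most 2x the optimum."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm: grow a minimum spanning tree (MST) from vertex 0.
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    # Depth-first walk of the MST, shortcutting repeat visits; by the
    # triangle inequality, shortcuts never increase the tour length.
    tour, seen, stack = [], set(), [0]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            tour.append(u)
            stack.extend(adj[u])
    return tour + [tour[0]]  # return to the starting point

points = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]
print(mst_double_tree_tour(points))  # a tour within 2x of optimal
```

The tree costs no more than an optimal tour, and walking it twice with shortcuts yields the factor-of-two bound; shaving that factor down to 1.5, and now a hair below, is where decades of work have gone.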
“One of my favorite theoretical computer scientists, Christos Papadimitriou, summed it up perfectly,” observed Karlin. “He said, ‘TSP is not a problem. It is an addiction.’”
Her faculty colleague Oveis Gharan knows the feeling well, having spent the past 10 years tackling various elements of TSP. For him, this milestone is another step forward on an open-ended journey — and one he is keen to delve into more deeply now that the dust has settled and the proof has checked out.
“A natural follow-up to our result would be to find a simpler proof with a significantly bigger improvement over the 50%, but to be honest, that is not really what interests me,” said Oveis Gharan. “Often, when someone breaks a long-standing barrier, they develop or employ several new tools or ideas in the process. These typically are not well understood in their full generality, because they are mainly specialized for that specific application. So, what I want to do after such a result is to better understand these new tools and ideas. Why do they work? What other applications can they possibly have? Can we generalize them further? Is there any other open problem that they can help us address? We’re exploring these questions now with regard to some of the tools that we used in this TSP paper.”
Left to right: Rachel Lin, Aayush Jain and Amit Sahai
Lin’s work with Jain and Sahai on indistinguishability obfuscation (iO) also represents a discovery that was decades in the making. In this case, the researchers proved the security of a potentially powerful cryptographic scheme for protecting computer programs from would-be hackers by making them unintelligible to adversaries while retaining their functionality.
The team’s breakthrough paper demonstrates that provably secure iO can be constructed from the subexponential hardness of four well-founded assumptions: Symmetric External Diffie-Hellman (SXDH) on pairing groups, Learning with Errors (LWE), Learning Parity with Noise (LPN) over large fields, and a Boolean Pseudo-Random Generator (PRG) that is very simple to compute. All four have a long history of study that is well-rooted in complexity, coding and number theory — enabling Lin and her collaborators to finally put a cryptographic approach that was first described in 1976, and mathematically formalized 20 years ago, on a firm footing. The researchers also devised a simple new method for leveraging LPN over large fields and PRGs in NC0 — a simple model of computation in which every output bit depends on a constant number of input bits — to build a structured-seed PRG (sPRG). The seed, which comprises both a public and a private part, maintains the scheme’s pseudo-randomness in cases where an adversary is able to see the public seed and the sPRG output.
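For a flavor of what computing a PRG in NC0 looks like, here is a toy, deliberately insecure local PRG in the Goldreich style: each output bit reads only five seed bits through a fixed public wiring and applies the well-studied XOR-AND predicate. The wiring, parameters and predicate below are illustrative assumptions, not the construction from the paper.

```python
# Toy "local" PRG sketch: every output bit depends on a constant number
# (five) of seed bits, the defining property of NC0. Deliberately insecure
# and NOT the structured-seed PRG from the iO paper; illustration only.
import random

def local_prg(seed_bits, stretch, locality=5, wiring_seed=0):
    n = len(seed_bits)
    rng = random.Random(wiring_seed)  # fixed, public wiring of taps
    out = []
    for _ in range(stretch):
        taps = rng.sample(range(n), locality)  # which 5 seed bits to read
        x = [seed_bits[t] for t in taps]
        # XOR-AND predicate, a common choice in the local-PRG literature:
        out.append(x[0] ^ x[1] ^ x[2] ^ (x[3] & x[4]))
    return out

seed = [random.randint(0, 1) for _ in range(64)]  # secret seed
print(local_prg(seed, stretch=128)[:16])  # 128 pseudorandom bits from 64
```

In a genuine candidate, the wiring and predicate are chosen to resist known attacks at the desired stretch; the point here is only the constant locality of each output bit.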
Now that they have verified the security of iO based on well-founded assumptions, Lin and her collaborators are already working to extend their results — and they are eager to see others expand upon them, as well.
“The next step is further pushing the envelope and constructing iO from weaker assumptions,” Lin said when the team announced its results. “At the same time, we shall try to improve the efficiency of the solutions, which at the moment is very far from being practical. These are ambitious goals that will need the joint effort from the entire cryptography community.”
STOC is organized by the Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory (ACM SIGACT).
What if you could engage with the world — go to class, do your job, meet up with friends, get a workout in — without leaving the comfort of your bedroom thanks to mixed-reality technology?
What if you could customize your avatar in many ways, but you couldn’t make it reflect your true racial or cultural identity? What if the best way to get a high-paying job was based on your access to this technology, only you are blind and the technology was not designed for you?
And what if notions of equity and justice manufactured in that virtual world — at least, for some users — still don’t carry over into the real one?
These and many other questions come to mind as one reads “Our Reality,” a new science-fiction novella authored by Tadayoshi (Yoshi) Kohno, a professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering. Kohno, who co-directs the UW’s Security and Privacy Research Lab and Tech Policy Lab, has long regarded science fiction as a powerful means through which to critically examine the relationship between technology and society. He previously explored the subject in a short piece he contributed to the Tech Policy Lab’s recent anthology, “Telling Stories,” which examines culturally responsive artificial intelligence (AI).
“Our Reality,” so named for the mixed-reality technology he imagines taking hold in the post-pandemic world circa 2034, is Kohno’s first foray into long-form fiction. The Allen School recently sat down with Kohno with the help of another virtual technology — Zoom — to discuss his inspiration for the story, his personal as well as professional journey to examining matters of racism, equity and justice, and how technology meant to help may also do harm.
First of all, congratulations on the publication of your first novella! As a professor of computer science and a well-known cybersecurity researcher, did you ever expect to be writing a work of science fiction?
Yoshi Kohno: I’ve always been interested in science fiction, ever since I was a kid. I’ve carried that fascination into my career as a computer science educator and researcher. Around 10 years ago, I published a paper on science fiction prototyping and computer security education that talked about my experiences asking students in my undergraduate computer security course to write science fiction as part of the syllabus. Writing science fiction means creating a fictional world and then putting technology inside that world. Doing so forces the writer — the students, in my course — to explore the relationship between society and technology. It has been a dream of mine to write science fiction of my own. In fact, when conducting my research — on automotive security, or cell phone surveillance, or anything else — I have often discussed with my students and colleagues how our results would make good plot elements for a science fiction story! The teenage me would be excited that I finally did it.
What was the inspiration for “Our Reality,” and what made you decide now was the right time to realize that childhood dream?
YK: Recently, many people experienced a significant awakening to the harms that technology can bring and to the injustices in society. We, as a society and as computer scientists, have a responsibility to address these injustices. I have written a number of scientific, scholarly papers. But I realized that some of the most important works in history are fiction. Think about the book “1984,” for example, and how often people quote from it. Now, I know my book will not be nearly as transformative as “1984,” but the impact of that book and others inspired me.
I was particularly disturbed by how racism can manifest in technologies. As an educator, I believe that it is our responsibility to help students understand how to create technologies that are just, and certainly that are not racist. I wrote my story with the hope that it would inform the reader’s understanding of racism and racism in technology. At the same time, I also want to explicitly acknowledge that there are already amazing books on the topic of racism and technology. Consider, for example, Dr. Ruha Benjamin’s book “Race After Technology,” or Dr. Safiya Umoja Noble’s book “Algorithms of Oppression.” I hope that readers committed to addressing injustices in society and in technology read these books, as well as the other books I cite in the “Suggested Readings” section of my novella.
In addition to racism in technology, my story also tries to surface a number of other important issues. I sometimes intentionally added flaws to the technologies featured in “Our Reality” — flaws that can serve as starting points for conversations.
Yoshi Kohno (Dennis Wise/University of Washington)
How did your role as the Allen School’s Associate Director for Diversity, Equity, Inclusion and Access inform your approach to the story?
YK: I view diversity, equity, inclusion, and access as important to me as an individual, important for our school, and important for society. I’ve thought about the connection between society and technology throughout my career. As a security researcher and educator, I have to think about the harms that technologies could bring. But when I took on the role of Associate Director, I realized that I had a lot more learning to do. I devoured as many resources as I could, I talked with many other people with expertise far greater than my own, I read many books, I enrolled in the Cultural Competence in Computing (3C) Fellows Program, and so on. This is one of the things that we as a computer science community need to always be doing. We should always be open to learning more, to questioning our assumptions, and to realizing that we are not the experts.
One of the things that I’ve always cared about as an educator is helping students understand not only the relationship between society and technology, but also how technologies can do harm. I’m motivated to get people thinking about these issues and proactively working to mitigate those harmful impacts. If people are not already thinking about the injustices that technologies can create, then I hope that “Our Reality” can help them start.
One of the main characters in “Our Reality,” Emma, is a teenage Black girl. How did you approach writing a character whose identity and experiences would be so different from your own?
YK: Your question is very good. I have several answers that I want to give. First, this question connects to one of the teaching goals of “Our Reality” and, in particular, to Question 6 in the “Questions for Readers” section of the novella. Essentially, how should those who design technologies approach their work when the users or affected stakeholders might have identities very different from their own? And how do they ensure that they don’t rely on stereotypes or somehow create an injustice for those users or affected stakeholders? Similarly, in writing “Our Reality,” and with my goal of contributing to a discussion about racism in technology, I found myself in the position of centering Emma, a teenage girl who is Black. This was a huge responsibility, and I knew that I needed to approach this responsibility respectfully and mindfully. My own identity is that of a cisgender man who is Japanese and white.
The first thing I did towards writing Emma was to buy the book “Writing the Other” by Nisi Shawl and Cynthia Ward. It is an amazing book, and I’d recommend it to anyone who wishes to write stories that include characters with identities other than their own. I read their book last summer. I then discovered that Shawl was teaching a year-long course in science fiction writing through the Hugo House here in Seattle, so I enrolled. That taught me a significant amount about writing science fiction and writing about diverse characters.
I should acknowledge that while I tried the best I could, I might have made mistakes. While I have written many versions of this story, and while I’ve received amazingly useful feedback along the way, any mistakes that remain are mine and mine alone.
In your story, Our Reality is also the name of a technology that allows users to experience a virtual, mixed-reality world. One of the other themes that stood out is who has access to technologies like Our Reality — the name implies a shared reality, but it’s only really shared by those who can afford it. There are the “haves” like Emma and her family, and the “have nots” like Liam and his family. How worried are you that unequal access to technology will exacerbate societal divides?
YK: I’m quite worried that technology will exacerbate inequity along many dimensions. In the context of mixed-reality technology like Our Reality, I think there are three reasons that a group of people might not access it. One is financial; people may not be able to afford the mixed-reality Goggles, the subscription, the in-app purchases, and so on. They either can’t access it at all or will have unequal access to its features, like Liam discovered when he tried to customize his avatar beyond the basic free settings. The second reason is because the technology was not designed for them. I alluded to this in the story when I brought up Liam’s classmate Mathias, who is blind. Finally, some people will elect not to access the technology for ideological reasons. When I think about the future of mixed-reality technologies, like Our Reality, I worry that society will further fracture into different groups, the “haves” or “designed fors” and the “have nots” or “not designed fors.”
Emma’s mother is a Black woman who holds a leadership position in the company that makes Our Reality, but her influence is still limited. For example, Emma objects to the company’s decision to make avatars generic and “raceless,” which means she can’t fully be herself in the virtual world. What did you hope people would take away from that aspect of the story?
YK: First, this is an example of one of the faults that I intentionally included in the technologies in “Our Reality.” I also want to point the reader to the companion document that I prepared, which describes in more detail some of the educational content that I tried to include in Our Reality. Your question connects to so many important topics, such as the notion of “the unmarked state” — the default persona that one envisions if they are not provided with additional information — as well as colorblind racism. This also connects to something that Dr. Noble discusses in the book “Algorithms of Oppression” and which I tried to surface in “Our Reality” — that not only do we need to increase the diversity within the field, but we need to overcome systemic issues that stand in the way of fully considering the needs of all people, and systemic inequities, in the design of technologies.
Stepping back, I am hoping that readers start to think about some of these issues as they read “Our Reality.” I hope that they realize that the situation described in “Our Reality” is unjust and inequitable. I hope they read the companion document, to understand the educational content that I incorporated into “Our Reality.” And then I hope that readers are inspired to read more authoritative works, like “Algorithms of Oppression” and the other books that I reference in the novella and in the companion document.
You and professor Franziska Roesner, also in the Allen School, have done some very interesting research with your students in the UW Security and Privacy Research Lab. Your novella incorporates several references to issues raised in that research, such as tracking people via online advertising and how to safeguard users of mixed-reality technologies from undesirable or dangerous content. It almost feels uncomfortably close to your version of 2034 already. So how can we as a society, along with our regulatory frameworks, catch up and keep up with the pace of innovation?
YK: Rather than having society and our regulatory frameworks catch up to the pace of innovation, we should consider slowing the pace of innovation. Often there is a tendency to think that technology will solve our problems; if we just build the next technology, things will be great, or so it seems like people often think. Instead of perpetuating that mentality, maybe we should slow down and be more thoughtful about the long-term implications of technologies before we build them — and before we need to put any regulatory framework in place.
Kohno and colleague Franziska Roesner have explored the privacy and security of mixed reality technologies with students in the Security and Privacy Research Lab
As part of changing the pace of innovation, we need to make sure that the innovators of technology understand the broader society and global context in which technologies exist. This is one of the reasons why I appreciate the Cultural Competence in Computing (3C) Fellows Program coming out of Duke so much, and why I encourage other educators to apply. That program was created by Dr. Nicki Washington, Dr. Shaundra B. Daily, and graduate assistant Cecilé Sadler at Duke University. The program’s goal is to help empower computer science educators, throughout the world, with the knowledge and skills necessary to help students understand the broader societal context in which technologies sit.
As an aside, one of the reasons that my colleagues Ryan Calo in the School of Law and Batya Friedman in the iSchool and I co-founded the Tech Policy Lab at the University of Washington is that we understood the need for policymakers and technologists to also come together and explore issues at the intersection between society, technology, and policy.
Speaking of understanding context, in the companion document to “Our Reality” you note “Computing systems can make the world a far better place for some, but a far worse one for others.” Can you elaborate?
YK: There are numerous examples of how technologies, when one looks at them from a 50,000-foot perspective, might seem to be beneficial to individuals or society. But when one looks more closely at the circumstances of specific individuals, one finds that they’re not providing a benefit; in fact, they have the potential to actively cause harm. Consider, for example, an app that helps a user find the location of their family or friends. Such an app might seem generally beneficial — it could help a parent or guardian find their child if they get separated at a park. But now consider situations of domestic abuse. Someone could use that same technology to track and harm their victim.
Another example, which I explored in “Our Reality” through Emma’s encounter with the police drones, is inequity across different races. Face detection and recognition systems are now widely understood to be inequitable because they have decreased accuracy with Black people compared to white people. This is incredibly inequitable and unjust. I encourage readers to learn more about the inequities with face detection and face recognition. One great place to start is the film “Coded Bias” directed and produced by Shalini Kantayya, which centers MIT Media Lab researcher Joy Buolamwini.
At one point, Emma admonishes her mother, a technologist, that she can’t solve everything with technology. How do we determine what is the best use of technology, and what is the responsibility of your colleagues who are the ones inventing it?
YK: I think that it is absolutely critical for those who are driving innovation to understand how the technology that they create sits within a broader society and interacts with people, many of whom are different from themselves. I referred earlier to this notion of a default persona, also called the “unmarked state.” Drawing from Nisi Shawl and Cynthia Ward’s book “Writing the Other,” this is more often than not someone who is white, male, heterosexual, single, young, and with no disabilities. Not only should one be thinking about how a technology would fit in the context of society, but also consider it in the context of the many people who do not identify with this default persona.
On top of that, when designing technologies for someone “not like me,” people need to be sure they are not invoking stereotypes or false assumptions about those who are not like themselves. There’s a book called “Design Justice” by Dr. Sasha Costanza-Chock about centering the communities for whom we are designing. As technologists, we ought to be working with those stakeholders to understand what technologies they need. And we shouldn’t presume that any specific technology is needed. It could be that a new technology is not needed.
Kohno drew inspiration from an academic-industry summit on mixed reality that he and Roesner co-organized in exploring the potential pitfalls of the technology in “Our Reality”
Some aspects of Our Reality sound like fun — for example, when Emma and Liam played around with zero gravity in the science lab. If you had the opportunity and the means to use Our Reality, would you?
YK: I think it is an open research question about what augmented reality and mixed-reality technologies will be like in the next 15 years. I do think that technologies like Our Reality will exist in the not-too-distant future. But I hope that the people developing these technologies will have addressed the access questions and societal implications that I raised in the story. As written, I think I would enjoy aspects of the technology, but I would not feel comfortable using it if the equity issues surrounding Our Reality aren’t addressed.
Stepping even further back, there is a whole class of risks with mixed-reality technologies that is not deeply surfaced in this story: computer security risks. This is a topic that Franziska Roesner and I have been studying at UW for about 10 years, along with our colleagues and students. There are a lot of challenges to securing future mixed-reality platforms and applications.
So you would be one of those ideological objectors you mentioned earlier.
YK: I would, yes. And, in addition to issues of access and equity and the various security risks, I used to also be a yoga instructor. I like to see and experience the world through my real senses. I fear that mixed-reality technologies are coming. But for me, personally, I don’t want to lose the ability to experience the world for real, rather than through Goggles.
Who did you have in mind as the primary audience for “Our Reality”?
YK: I had several primary audiences, actually. In a dream world, I would love to see middle school students reading and discussing “Our Reality” in their social studies classes. I would love for the next generation to start discussing issues at the intersection of society and technology before they become technologists. If students discuss these issues in middle school, then maybe it will become second nature for them to always consider the relationship between society and technology, and how technologies can create or perpetuate injustices and inequities.
I would also love for high school and first- and second-year college students to read this story. And, of course, I would love for more senior computer scientists — advanced undergraduate students and people in industry — to read this story, too. I also hope that people read the books that I reference in the Suggested Readings section of my novella and the companion document. Those references are excellent. My novella scratches the surface of important issues, and provides a starting point for deeper considerations; the books that I reference provide much greater detail and depth.
As an educator, I wanted the story to be as accessible as possible, to the broadest audience possible. That’s why I put a free PDF of the novella on my website. I also put a PDF of the companion document on my web page. I wrote the companion document in such a way that I hope it will be useful and educational to people even if they never read the “Our Reality” novella.
What are the main lessons you hope readers will take away from “Our Reality”?
YK: I hope that readers will understand the importance of considering the relationship between society and technology. I hope that readers will understand that it is not inevitable that technologies be created. I hope that readers realize that when we do create a technology, we should do so in a responsible way that fully acknowledges and considers the full range of stakeholders and the present and future relationships between that technology and society.
Also, I tried to be somewhat overt about the flaws in the technologies featured in “Our Reality.” As I said earlier, I intentionally included flaws in the technologies in “Our Reality,” for educational purposes. But when one interacts with a brand new technology in the real world, sometimes there are flaws, but those flaws are not as obvious. I would like to encourage both users and designers of technology to be critical in their consideration of new technologies, so that they can proactively spot those flaws from an equity and justice perspective.
If my story reaches people who have not been thinking about the relationship between society, racism, and technology already, I hope “Our Reality” starts them down the path of learning more. I encourage these readers to look at the “Our Reality” companion document, and explore some of the other resources that I reference. I would like to also thank these readers for caring about such an important topic.
Readers may purchase the paperback or Kindle version of “Our Reality” on Amazon.com, and access a free downloadable PDF of the novella, the companion document, and a full list of resources on the “Our Reality” webpage.
Paul Beame, an associate director of the Allen School and a professor in the Theory of Computation group, has been honored with the 2021 ACM SIGACT Distinguished Service Award for his more than 20 years of dedicated and effective support to the theoretical computer science community. Each year, the Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory presents the award to an individual or group who has gone above and beyond in service to their colleagues even as they work to advance the foundations of computing.
Since joining the University of Washington faculty in 1987, Beame has devoted much of his research career to exploring pure and applied complexity to address a range of computational problems in multiple domains. Beyond his research contributions, Beame has dedicated significant time and effort to promoting knowledge-sharing and collaboration within the theoretical computer science community through various leadership roles in ACM, SIGACT, and its sibling organization within the IEEE Computer Society, the Technical Committee on Mathematical Foundations of Computing (TCMF). He is also credited with advancing two of the field’s flagship conferences, the ACM Symposium on the Theory of Computing (STOC) and the IEEE Symposium on Foundations of Computer Science (FOCS), through a combination of formal and informal leadership and service.
FOCS 1999 marked Beame’s first foray into conference leadership, when he took on the role of program committee chair and served informally as a local co-organizer. He subsequently went on to serve as general chair, finance chair, and/or local co-organizer for no fewer than half a dozen more FOCS and STOC conferences between 2006 and 2011. Beyond fulfilling his official leadership duties, over the years Beame has provided organizers of numerous successive conferences with unofficial assistance ranging from identifying potential venues, to recruiting local volunteers, to addressing issues with the online registration system.
“Paul’s tireless service, in both official and unofficial roles, has been highly instrumental in ensuring the smooth running and continued vitality of the flagship STOC and FOCS conferences and of SIGACT itself,” said ACM SIGACT Chair Samir Khuller, the Peter and Adrienne Barris Professor and Chair of the Computer Science Department at Northwestern University. “In addition to his many official roles, Paul’s influence is also felt, equally significantly, through his unofficial roles as ‘SIGACT oracle’ and advisor to those responsible for running our main conferences every year.”
Following the completion of his term as chair of IEEE’s TCMF in 2012, Beame began his tenure as chair of ACM SIGACT. Over the next six years, as he served out his terms as chair and immediate past chair of the group, Beame was credited with promoting greater cooperation between the two organizations, which represent the theoretical computer science community in parallel. He was also instrumental in revitalizing STOC through the introduction of TheoryFest, for which he co-planned and co-organized the first two events in 2017 and 2018.
As a leader, Beame is known for putting in place mechanisms that ensure good management practices carry forward, and for building up the infrastructure for running conferences and committees that enables his successors to seamlessly take the reins. And he hasn’t been shy about challenging conventional wisdom along the way.
“There are typically two distinct types of exceptional leaders for professional organizations,” observed David Shmoys, Laibe/Acheson Professor at Cornell University. “First, there are the people who are detail-focused and excel at making sure that the myriad of lower-level processes work with the precision of a well-engineered system — in a pre-digital world, the standard metaphor would be a Swiss watch. Second are the ‘out-of-the-box’ thinkers who aren’t content with making things run as well as possible within the framework of the status quo, but instead push the organization to improve upon itself by inventing new ways that the systems can run. Paul excels in both dimensions, and this is what makes his service extraordinarily remarkable.”
Even after officially passing the baton to the next slate of leaders and volunteers, Beame continues to be generous with his time and advice to help sustain the community’s momentum.
“Whereas many people who have served in leadership roles view stepping down as a chance to leave obligations behind,” Khuller noted, “Paul has repeatedly viewed it as an opportunity to smooth the path for future incumbents.”
In addition to laying the groundwork for future leaders, Beame was also a vocal champion of open access to SIGACT and other ACM conference proceedings during his time representing SIGACT and the SIG Board. Despite entrenched opposition from some quarters, his position ultimately prevailed. Beginning in 2015, all SIGACT and many other SIG conference proceedings were made open-access in perpetuity via their respective conference websites — including the STOC website, which Beame himself initially built and maintained along with the FOCS website to provide an online history of past conferences and their proceedings.
“This award is a well-earned honor for Paul, who has been tireless in helping SIGACT move forward,” said Allen School professor emeritus Richard Ladner.
Ladner’s faculty colleague Anna Karlin agreed. “For two decades, Paul has repeatedly stepped up to provide every manner of service to the SIGACT community,” she said. “He is an exemplary recipient of this award.”
Yasaman Sefidgar, a Ph.D. student working with Allen School professor James Fogarty, has been named a 2021 Facebook Fellow for her research developing computational and data-driven systems that inform and support social and health interventions. Sefidgar’s work is currently focused on studying the powerful effects of social support and how to make it more accessible through online platforms, especially during difficult times.
Sefidgar aims to use her Fellowship to understand the emotional needs of people online — specifically among minority populations who experience incidents like microaggressions. Her findings will help her design systems that can aid people coping with distress. In her previous work with students struggling with microaggressions, she found that those who successfully worked through them bounced back quickly after seeking support from others who had similar experiences and could offer guidance. The earlier they found this support, the better their recovery. Sefidgar wants to connect people based on their experiences and the impact the connection makes, then guide the social interaction dynamics to make the connection a positive one.
After studying insights from interviews and a large dataset, and building a framework to analyze the collected stories and encounters, Sefidgar will develop algorithms for a platform that can support more specific needs and promote well-being.
“We need additional knowledge of mechanisms of effective social support in situations of interest. My mixed-method research addresses both fronts,” said Sefidgar. “Qualitative accounts of how individuals currently seek support would highlight the challenges and barriers that technology solutions might address. They can also inform quantitative analysis that not only confirms the qualitative insights but also brings to light the mechanistic elements of effective support.”
Sefidgar said that Facebook has the resources to improve social support for well-being by channeling the day-to-day experiences shared on its platforms into new recommendation policies and content curation. The company could also implement interventions that improve conversations.
“I hope findings from my work can inform the design of interfaces, interactions, and algorithms on Facebook and Instagram with the potential to benefit millions of users,” she said.
Before coming to the Allen School, Sefidgar obtained her B.S. in computer engineering from the Sharif University of Technology and M.S. degrees in human-computer interaction from the University of British Columbia and in computer vision from Simon Fraser University. She has co-authored 10 publications, including a Best Paper Honorable Mention for “Situated Tangible Robot Programming” from the 2017 International Conference on Human-Robot Interaction.
Sefidgar is one of only 21 doctoral students worldwide chosen to receive Facebook Fellowships this year based on their innovative research. Past Allen School recipients of the Facebook Fellowship include Minjoon Seo (2019), James Bornholt and Eunsol Choi (2018), and Aditya Vashistha (2016).
Last week marked the end of an academic year unlike any other — and the culmination of all of the hard work, hopes and dreams of a graduating class unlike any other. With the resumption of in-person pomp and circumstance precluded due to ongoing pandemic-related restrictions at the University of Washington, the Allen School marked this important milestone with a virtual tribute highlighting the Class of 2021’s commitment to excellence and service — and acknowledging their perseverance in the face of multiple, significant challenges over the past year.
Professor Magdalena Balazinska, director of the Allen School, led the tribute with a video message congratulating the graduates on rising to the occasion and completing their degrees under difficult circumstances. And now that they had, she urged them to use what they have learned to make the world a better place — and continue working to make computing a more inclusive field.
“As you graduate, you have the opportunity to make an impact in a variety of ways. Take it. Be bold,” Balazinska said. “Find ways to help the community and the world around you. Work on addressing humanity’s greatest challenges. Develop technology that serves everyone, not just those who look like you or share the same background or abilities. Strive to have diverse teams where everyone feels welcome and included.
“Making the world better is not only your opportunity, but it is your responsibility,” she continued. “Because if you don’t do it, then who will?”
Balazinska’s remarks were followed by a series of congratulatory messages to the graduates from her fellow faculty members, many of whom also contributed to a Class of 2021 Kudoboard that included messages from friends, family and classmates to the newly minted graduates. The Allen School also took the opportunity to recognize members of the graduating class who had particularly distinguished themselves through their academic achievements, leadership, and service contributions with its annual student awards.
From left: Skyler Hallinan, Joy He-Yueya and Parker Ruth
Skyler Hallinan is one of three graduates to receive the Allen School’s Outstanding Senior Award, which honors students with exceptional academic performance who exemplify leadership, good citizenship, and the pursuit of knowledge. Hallinan, who majored in computer science, applied & computational mathematical sciences, and bioengineering, was recognized for his undergraduate research in natural language processing under the guidance of Allen School professors Yejin Choi and Noah Smith, including techniques for analyzing misinformation and media bias and advancing commonsense reasoning. He also worked on the development of an orally ingestible hydrogel that would act as a substitute for a functional kidney in patients with chronic kidney disease and served as a teaching assistant (TA) in the Allen School for multiple quarters.
Joy He-Yueya and Parker Ruth each earned an Outstanding Senior Award and the Allen School’s Best Senior Thesis Award. He-Yueya, who earned her bachelor’s in computer science, worked with Allen School professor Tim Althoff in the Behavioral Data Science Group on research that led to her award-winning thesis, “Assessing the Relationship Between Routine and Schizophrenia Symptoms with Passively Sensed Measures of Behavioral Stability.” She also contributed to a project at the Max Planck Institute for Software Systems that used reinforcement learning to generate personalized curricula for students learning to program. He-Yueya has served in multiple tutoring and peer mentoring roles — including helping other undergraduates get their start in research.
Eric Fan (left) and Eunia Lee
Ruth — who previously received the Dean’s Medal for Academic Excellence from the College of Engineering — was recognized by the Allen School for his many contributions in undergraduate research as a member of the UbiComp Lab, where he advanced mobile health tools that screen for conditions such as osteoporosis and stroke; enable continuous physiological sensing; and monitor threats to public health. That work formed the basis of his award-winning senior thesis, “Design Principles for Mobile and Wearable Health Technologies,” advised by professor Shwetak Patel.
The Allen School honored two other graduating seniors — Eric Fan and Eunia Lee — with Undergraduate Service Awards. This award recognizes students who have gone above and beyond in supporting the many events and activities that contribute to a vibrant school community. Fan has helped build that sense of community both in and out of the classroom, whether in person or remote. He served as TA coordinator for the Allen School’s introductory programming series, the entrée into computer science for many students in and outside of the major. He also served as an officer of the UW chapter of the Association for Computing Machinery (ACM) and a member-at-large of the Allen School’s Student Advisory Council. After earning his bachelor’s in computer engineering, Fan is continuing his studies in the Allen School’s combined B.S./M.S. program.
Lee, who earned her degree in computer science, has taken on multiple leadership roles in the Allen School’s K-12 and campus-level outreach, with a focus on strengthening diversity and inclusion. Her contributions have included service as a lead Ambassador to high schools and lead TA for the direct-to-major seminar that assists freshmen entering the Allen School. She also chaired the UW chapter of ACM and was instrumental in the formation of the Allen School’s Diversity & Access team.
Taylor Ka (left) and Andrew Wei
In addition to the awards to graduating students, the Allen School also commended half a dozen students who served as TAs in the past year with its Bob Bandes Memorial Awards for Excellence in Teaching. Service Award winner Eric Fan was among the Bandes Award recipients. Students appreciated Fan, who was a TA nine times, for being approachable and open to communication, including with those who took the course asynchronously: “Eric was a TA that contributed to my learning greatly, especially in a quarter that didn’t have the best circumstances surrounding it.”
One of Fan’s fellow Bandes Award winners, Taylor Ka, served as a TA for eight quarters of one of the courses in the introductory series, Computer Programming II. A faculty member called Ka, who earned her bachelor’s in computer science on the way to enrolling in the Allen School’s combined B.S./M.S. program, “easily the best TA I have had across all courses at UW” and highlighted her knowledge, kindness, and approachability. A student recommended another award recipient, Andrew Wei, for his generosity and patience, and recalled how Wei would stay up late multiple nights per week to work with students who were struggling with assignments. Wei, who graduated from the B.S./M.S. program, assisted with no fewer than eight courses over 12 quarters.
The Allen School bestowed Bandes Award Honorable Mentions on three TAs: undergraduate Kyrie Dowling, and Ph.D. students Liang He and Edward Misback. Dowling, who has assisted with Systems Programming and The Hardware/Software Interface, stood out for her high energy, skillful explanation of concepts, and the fact that “it is very evident she cares about the learning of all of her students.” Meanwhile, He was recognized for his “positivity and creative encouragement” in supporting students in the remote learning environment during studio-oriented courses that typically feature hands-on lectures and coursework — including by soldering, preparing and packaging more than 50 kits for delivery to students wherever they happened to be. Last but not least, Misback was recognized for guiding students through the “numerous technical pitfalls” of networking and web serving while encouraging them to discover solutions for themselves. As one faculty nominator suggested, “He will make an excellent teacher if he chooses that path.”
Left to right: Kyrie Dowling, Liang He and Edward Misback
This year’s Bandes Award honorees were selected from among 185 TAs nominated by faculty and students. A total of 670 students served as TAs during the academic year, assisting faculty and their fellow students to make the most of remote learning. Thanks in part to their efforts, an estimated 635 Allen School students earned their degrees in 2020-2021.
As she commended the graduates for achieving their educational goals, Balazinska noted this was not the end of their journey, but rather the beginning.
“Whatever your next steps are, an Allen School degree opens so many opportunities for you. I encourage you to be courageous. Continue to reach for the stars,” said Balazinska. “Go out and make your mark on the world. Every single one of you has what it takes to do great things.”
Congratulations to all of our graduates! We look forward to seeing what you do next — and to welcoming you back as alumni!
Editor’s note: Although the Allen School will award an estimated 635 degrees for the 2020-2021 academic year, only those students who opted into having their information displayed publicly are included in the online tribute.
Researchers at the University of Washington contributed to four Best Papers and 11 Best Paper Honorable Mentions at the recent Conference on Human Factors in Computing Systems (CHI 2021) organized by the Association for Computing Machinery. The virtual conference offered UW researchers the opportunity to showcase the breadth and depth of their expertise in human-computer interaction (HCI) and design as well as the strength of interdisciplinary collaborations such as DUB. Members of the Allen School co-authored two of the Best Papers and eight of the papers recognized with Honorable Mentions.
In addition to the papers, Allen School professor Jeffrey Heer, director of the Interactive Data Lab, was inducted into the CHI Academy for his substantial, cumulative contributions to the field of human-computer interaction, social computing and data visualization. He is one of only eight new members elected to the Academy this year.
Authors of the Best Paper: “Falx: Synthesis-Powered Visualization Authoring,” from left to right, Wang, Bodik and Ko.
Allen School Ph.D. student Chenglong Wang and professor Rastislav Bodik of the Programming Languages & Software Engineering (PLSE) group and iSchool professor and Allen School adjunct professor Amy J. Ko co-authored the award-winning paper, “Falx: Synthesis-Powered Visualization Authoring,” with professors Yu Feng of the University of California, Santa Barbara and Isil Dillig of the University of Texas at Austin, and former Allen School professor Alvin Cheung, now a faculty member at the University of California, Berkeley. In the publication, the team introduces Falx, a tool that enables data analysts to create data visualizations more easily from demonstrations. While modern visualization tools make visualization design easier, exploratory visualization often involves a mismatch between data layout and design that requires significant effort to resolve. Falx addresses the issue by automatically inferring the visualization specification from user demonstrations and transforming the data to match the design.
Allen School authors of the Best Paper: “‘That Courage to Encourage’: Participation and Aspirations in Chat-Based Peer Support for Youth Living with HIV,” Karusala and Anderson.
Allen School Ph.D. student Naveena Karusala and professor Richard Anderson of the Information and Communication Technology for Development (ICTD) Lab co-authored “‘That Courage to Encourage’: Participation and Aspirations in Chat-Based Peer Support for Youth Living with HIV” with UW Global Health professors Brandon Guthrie, Grace John-Stewart and Keshet Ronen; professor Megan A. Moreno at the University of Wisconsin, Madison; and Kenyatta National Hospital researchers David Odhiambo Seeh, who is also a research assistant at UW, Cyrus Mugo and Irene Inwani. For their award-winning paper, the team conducted a qualitative study of WhatsApp-based facilitated peer support groups for youth living with HIV in Nairobi, Kenya. While chat apps can make patient-provider communication and peer support more accessible outside of clinical settings, the experience of participants in such group chats in areas where phone sharing and intermittent data access are common is understudied. Using a combination of chat records and interviews with participants, the researchers found that the youth in the chat groups were motivated by newfound aspirations and a sense of community to manage their health despite the complexities of group dynamics, intermittent participation, and concerns about privacy. The paper offers takeaways for participation and privacy in chat-based health communities and how aspirations can be factored into the design of health interventions.
In addition to the two award-winning papers, Allen School researchers contributed to eight papers that earned Honorable Mentions for addressing topics such as medical making during a pandemic, remote collaboration and communication, the impact of prison surveillance, and more:
The same team also contributed to “The Right to Help and the Right Help: Fostering and Regulating Collective Action in a Medical Making Reaction to COVID-19,” which explores the tension between the medical field’s “do no harm” ethos and maker communities’ desire to rapidly innovate in response to shortages of PPE. To address this tension and strike a balance between action-oriented and regulated practices, the researchers recommend that regulatory bodies build coalitions with makers, that online platforms give communities more control over the presentation of information, and that repositories balance the need to distribute information with limiting the spread of misinformation.
“‘You Gotta Watch What You Say’: Surveillance of Communication with Incarcerated People,” by Allen School Ph.D. student Kentrell Owens, alumna Camille Cobb (Ph.D., ‘19), now a postdoc at Carnegie Mellon University, and CMU professor Lorrie Cranor, examines the impact of third-party surveillance of communications between incarcerated people and their families. Using semi-structured interviews, the team explored the implications of family members’ inaccurate understanding of surveillance and the misalignment of incentives between end-users and vendors to enhance ongoing conversations around carceral justice and the need for more privacy-sensitive communication tools.
All told, UW authors contributed a total of 50 papers presented at this year’s conference. See the complete list of UW CHI papers here.
After beginning his research and teaching career focused on theoretical computer science, Anderson embraced the opportunity to generate a tangible impact on underserved populations by helping to build up the emerging field of ICTD beginning in the early 2000s. One of his earliest contributions was to community-led education via the Digital StudyHall project, which brought high-quality teaching and interactive content to rural classrooms in India via facilitated video instruction. He subsequently teamed up with Seattle-based global health organization PATH on Projecting Health, an initiative aimed at using digital communications to help people in low-resource areas, including those with low literacy, to learn about and practice health-oriented behaviors to support maternal and child health. To date, Projecting Health has reached an estimated 190,000 people across 180 villages through locally produced videos addressing topics such as nutrition, immunization, and family planning.
“Through empowering community teams with this novel simplified filming and editing process to develop messaging for their own communities, Richard and our team achieved what rarely is achieved in global health — full ownership of a process by the very communities who would then use the output,” said Dr. Kiersten Israel-Ballard, the team lead for Maternal, Newborn, Child Health and Nutrition at PATH and an affiliate professor in the UW Department of Global Health. “As a global health specialist, it is rare to work with a visionary like Richard, who bridges fields and cultures to create innovative solutions. Our work is better having learned from him and his team.”
For the past six years, Anderson has led the development and deployment of open-source software tools for mobile data collection and analysis known as the Open Data Kit (ODK). Initially the brainchild of Anderson’s late friend and colleague Gaetano Borriello, ODK started out as a customizable survey tool designed to be “easy to try, easy to use, easy to modify and easy to scale.” Governments and non-profit organizations in more than 130 countries have relied on successive versions of ODK — including a second-generation version of the toolkit developed under Anderson’s leadership that enabled non-linear workflows and longitudinal surveys — to advance public health, wildlife conservation, election monitoring, essential infrastructure, and more.
Shortly after taking the helm, Anderson and his collaborators managed the successful transition of ODK from a UW initiative to a stand-alone enterprise. Anderson and his team subsequently extended the original software’s capabilities with the release of ODK-X, which enables users such as PATH, the World Mosquito Program and the International Federation of Red Cross and Red Crescent Societies to build customized JavaScript-based apps for managing and visualizing data in the field in addition to the traditional survey forms. Recent applications of ODK-X include vaccine cold-chain management, vector-borne disease monitoring, and humanitarian response.
Anderson has also led the charge to bring secure digital financial services to areas that lack access to traditional banking. In 2016, Anderson and other members of the ICTD Lab joined forces with the Allen School’s Security and Privacy Research Lab to launch the Digital Financial Services Research Group (DFSRG) with the goal of making financial products such as mobile payments and savings accounts more accessible to underserved communities. With funding from the Bill & Melinda Gates Foundation, the DFSRG addresses fundamental challenges to the development and large-scale adoption of digital financial products that benefit some of the lowest-income people in the world, enabling them to participate in digital commerce and ensuring that, in Anderson’s own words, “an event like an accident or a pregnancy doesn’t send them over the edge.”
As one of the founding champions of the ICTD movement, Anderson has been instrumental in uniting various communities under the umbrella of ACM COMPASS — short for Computing and Sustainable Societies — and organizing conferences, workshops, and tutorials to engage more researchers, practitioners and students in this work. He has also been a vocal proponent of diversifying host countries to include conference sites such as Ecuador, Ghana and Pakistan. In conjunction with his research and community leadership, Anderson has been credited with demonstrating how to build effective collaborations between computer scientists and non-governmental organizations (NGOs). By combining the former’s technical expertise with the latter’s geographical and domain expertise, Anderson has forged partnerships that ensure solutions developed in the lab can be effectively deployed in the field by people without computing experience — and that they actually address the real-world problems of the people they aim to serve.
“Richard is a top-notch computer scientist and a more than capable teacher, but his real contribution is creating an environment in which CS innovation can be brought to bear on the real problems of real people in developing regions,” said Eric Brewer, a professor of computer science at the University of California, Berkeley who is also Fellow and VP Infrastructure at Google. “His work is inspiring to students across many disciplines, especially when they see the impact of his work on others.”
Anderson joined the Allen School faculty in 1986 after completing a postdoc at the Mathematical Sciences Research Institute in Berkeley, California. He earned his Ph.D. in computer science from Stanford University and his bachelor’s in mathematics from Reed College. Anderson is the first Allen School faculty member to receive the ACM Eugene L. Lawler Award, which is typically given once every two years in honor of an individual or group who has made a significant humanitarian contribution through the application of computing technology.
“We felt the work is going to have a significant impact on the community based on the algorithmic approach and the very thorough experiments that were done in the paper itself,” said Kira Radinsky, the Best Paper Award chair. “This is the direction we felt is going to bring the most value to the community.”
The paper, which was co-authored with UW Department of Psychiatry and Behavioral Sciences professor David Atkins and Stanford University Department of Psychiatry and Behavioral Sciences instructor Adam Miner, addresses how to improve web-based mental health conversations on online peer-to-peer support platforms, which could help expand access to treatment and reduce the global disease burden.
“We describe a system that can give feedback to help someone express empathy more effectively when supporting others based on natural language processing and machine learning innovations — reinforcement learning for dialogue,” said Althoff, who directs the Allen School’s Behavioral Data Science Group. “This is hugely exciting as this work has been a big step towards one of the biggest and most meaningful goals in my research for a long time.”
The team, working in collaboration with the UW and Stanford medical schools and TalkLife, the largest online peer-to-peer support platform worldwide, found that online mental health conversations could express significantly more empathy than they currently do. To facilitate conversations with higher levels of empathy, which is key to providing successful support, they introduced a new task of empathic rewriting. Their approach employs artificial intelligence (AI) tools to identify low-empathy responses and rewrite them to increase the empathy expressed in chat posts.
As part of this work, the researchers introduced PARTNER, a new deep reinforcement learning agent that learns to make edits to text to increase empathy in a conversation. Through a combination of automatic and human evaluation, the team demonstrated that PARTNER is capable of generating more empathetic, specific and diverse responses than what is currently being shared on TalkLife or what current machine learning models can provide.
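To give a flavor of how edit-based rewriting driven by an empathy reward might work, here is a minimal sketch in Python. Everything in it, including the empathy_score and propose_edits functions, is a hypothetical stand-in for illustration only; PARTNER trains a deep reinforcement learning policy over text edits rather than using the greedy loop shown here.

```python
# Minimal sketch of edit-based rewriting with an empathy reward.
# All names here (empathy_score, propose_edits, rewrite) are
# hypothetical placeholders, not the actual PARTNER implementation.

def empathy_score(text: str) -> float:
    """Stand-in reward: counts empathic cue words. PARTNER relies on
    a learned empathy model rather than a word list like this."""
    cues = {"sounds", "feel", "understand", "sorry", "here"}
    return sum(word.strip(".,!?").lower() in cues for word in text.split())

def propose_edits(text: str) -> list[str]:
    """Stand-in edit proposals: candidate insertions at the start.
    The real agent learns where and how to edit the response."""
    prefixes = ["That sounds really hard. ", "I understand how you feel. "]
    return [p + text for p in prefixes if not text.startswith(p)]

def rewrite(text: str, steps: int = 2) -> str:
    """Greedily keep edits that raise the reward; an RL policy would
    instead be trained to maximize the expected reward."""
    for _ in range(steps):
        candidates = propose_edits(text)
        if not candidates:
            break
        best = max(candidates, key=empathy_score)
        if empathy_score(best) <= empathy_score(text):
            break
        text = best
    return text

print(rewrite("Have you tried going for a walk?"))
```

Even this toy hints at why the reward must be balanced against other qualities: a naive maximizer simply stacks empathic phrases, which is consistent with the team's choice to evaluate responses for specificity and diversity as well as empathy.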
“Rarely does mental health research have the practical application and potential for impact that we believe this research will have,” said Jamie Druitt, CEO of TalkLife. “This proposed research directly addresses real challenges and can be implemented to create measurable change on mental health platforms.”
This research has been supported in part by a Microsoft AI for Accessibility grant, the Allen Institute for Artificial Intelligence, NSF grant IIS-1901386, NIH grant R01MH125179, and the Bill & Melinda Gates Foundation (INV-004841).
Congratulations Ashish, Tim, Inna and the rest of the team!
Each year, the University of Washington College of Engineering recognizes the dedication and drive of its students, teaching and research assistants, staff and faculty by honoring a select few with a College of Engineering Award. This year, the Allen School has three recipients: web computing specialist Della Welch won the Professional Staff Award, professor Linda Shapiro earned the College of Engineering Faculty Research Award and professor and alumnus Jon Froehlich (Ph.D., ‘11) received the College of Engineering Outstanding Faculty Member Award.
Della Welch
The College recognized Welch for her excellent customer service, resourcefulness, innovation and creativity as a member of the Computer Science Laboratory group, which oversees all information technology assets and services across the Allen School. Welch first joined the Allen School in 2016 as an undergraduate working as a student lab assistant while studying information systems in the Foster School of Business. Impressed by her reliability and dedication to her job, the team hired her full-time after graduation as an education tools specialist. According to Dan Boren, the Allen School’s applications systems engineer, Welch surprised them with her work ethic while still a student.
“By her last year before graduation, Della had formed a detailed and comprehensive image of our business processes and challenges. She took the initiative to root out wasted effort and energy, and began to design processes and tools to boost the efficiency of our employees and the convenience of our customers,” he said. “It began with a simple, self-service touch screen application to let people check out loaner equipment. This clever tool that she built in her spare time proved that she had a wise and insightful perspective, and the decision was made to try to attract her as a full-time developer when she graduated.”
Shortly thereafter, construction on the Allen School’s second building, the Bill & Melinda Gates Center, finished up and Welch joined a team of her peers to undertake a monumental project: orchestrating the move of people, labs and equipment into the new space. According to Boren, Welch was everywhere they needed her, pulling cables from behind desks, using her systems administration skills, and educating herself in key areas such as user-interface design, modern software engineering methodologies and the latest application frameworks — all of which proved to be quite useful in early 2020 when the pandemic hit.
With the rapid move to remote work, Welch and her team had to rework admissions processes. She dove into what Boren described as an incredibly complex piece of software, modernizing and streamlining it and raising the bar for user design and testability to get through the admissions season. What she created in an emergency ended up being more useful and maintainable than what was already in place. Welch also volunteered to write a software registry to be shared with and used by IT teams across the entire university.
“Through all of this, her systems administration skills have been invaluable, and somehow she finds the time to reprise her role as the cheerful help-desk person,” Boren said. “When you post an opening for a new hire, Della is the person you’re hoping will apply.”
Linda Shapiro
The College honored Shapiro, who holds a joint appointment in the Allen School and the Department of Electrical & Computer Engineering and is also an adjunct professor of biomedical informatics and medical education, for her extraordinary and innovative contributions to research and support of diverse students in research. Since she first arrived at the UW in 1986, Shapiro has cultivated a reputation as a highly regarded researcher in multiple technical fields, a creative and open collaborator across many disciplines and an accomplished, caring mentor. She has been working in these fields for 48 years, has written nearly 300 research papers and has supervised the Ph.D. theses of 48 doctoral students.
While Shapiro’s research career itself is highly impressive, her connection with her students is equally profound.
“Linda puts her students and their interests first. She is committed to helping them think broadly about research and personal goals and to strategically choose projects that further those goals,” said Magdalena Balazinska, professor and director of the Allen School. “This is demonstrated by her ability to define and promote new collaborations across groups, departments, and institutions, which help her students gain needed domain knowledge and technical expertise.”
In fields with few women pursuing Ph.D.s, Shapiro has recruited and advised 22 female students. She is committed to increasing diversity in the College, and for 15 years she has mentored undergraduates via the Distributed Research Experiences for Undergraduates (DREU) program, a highly selective national program whose goal is to increase the number of people from underrepresented groups who go on to graduate school in computer science and engineering.
Shapiro earned a B.S. in mathematics from the University of Illinois (‘70), and an M.S. (‘72) and Ph.D. (‘74) in computer science from the University of Iowa, before joining the computer science faculty at Kansas State University in 1974. In 1979 she moved to the CS faculty of Virginia Polytechnic Institute and State University, where she spent five years, then spent two years as the director of Intelligent Systems at Machine Vision International in Ann Arbor, Michigan, before joining the UW faculty in what was then known as the Department of Electrical Engineering. She joined the Allen School four years later.
The College honored Froehlich for his innovative and creative approaches to supporting remote learning and research during the pandemic — and for going above and beyond, in big and small ways, to support his community during this time.
Froehlich is the director of the Allen School’s Makeability Lab, where faculty and students design, build and study interactive tools and techniques to address pressing societal challenges. He also serves as the associate director of the Center for Research and Education on Accessible Technology (CREATE), an interdisciplinary center at the UW focused on making technology accessible and making the world more accessible through technology.
The courses Froehlich teaches and the research he conducts fundamentally depend on physical resources and learning experiences; even so, he did not let the pandemic stop him from offering the same quality of education to his students wherever they happened to be. Froehlich adapted his physical computing courses for virtual platforms and, along with Ph.D. student Liang He, assembled and mailed hardware kits for the courses to his students’ homes. He created a fully interactive website with tutorials and videos using a green screen in his home office, which he turned into a virtual teaching studio. Froehlich’s passion for teaching and dedication to making the experience better for his students were noted and appreciated in their course evaluations.
In addition to his commitment to teaching his students effectively regardless of their location, Froehlich also served as a strong mentor to them at a time when many felt isolated from family and friends.
“During this time, Jon has prioritized mental and physical health in his group and purchased additional resources to equip his graduate students’ homes with the equipment they require, including 3D printers, soldering irons, chairs, and computers,” said Balazinska. “Moreover, to enable students to continue their research agendas, he helped secure multiple remote internships for his students and has weekly one-on-ones with each student and ‘game hours’ with his group to help his students relax, socially interact, and continue to bond during the pandemic.”
Froehlich has also been committed to helping colleagues improve their virtual classrooms. He co-created an international working group, “Teaching physical computing remotely,” which meets to discuss challenges and solutions for teaching physical computing and other computational craft courses online. He’s also co-chairing the 2022 International ACM SIGACCESS Conference on Computers and Accessibility and is committed to making the future of the conference a place not only for those with physical or sensory disabilities but also for those with chronic illnesses, caretaking responsibilities or other commitments that prevent physical travel.
Froehlich joined the Allen School, his alma mater, as a professor in 2017. Before that, he was a professor of computer science at the University of Maryland, College Park. He previously earned his M.S. in Information and Computer Science at the University of California, Irvine.
Froehlich has earned a Sloan Research Fellowship and an NSF CAREER Award and has published more than 70 scientific peer-reviewed publications, including seven Best Papers and eight Best Paper Honorable Mentions. His team’s paper on Tohme earned a place on ACM Computing Reviews’ “Best of Computing 2014” list.
Source: National Institutes of Health Clinical Center*
Artificial intelligence promises to be a powerful tool for improving the speed and accuracy of medical decision-making to improve patient outcomes. From diagnosing disease, to personalizing treatment, to predicting complications from surgery, AI could become as integral to patient care in the future as imaging and laboratory tests are today.
But as Allen School researchers discovered, AI models — like humans — have a tendency to look for shortcuts. In the case of AI-assisted disease detection, such shortcuts could lead to diagnostic errors if deployed in clinical settings.
In a new paper published in the journal Nature Machine Intelligence, a team of researchers in the AIMS Lab led by Allen School professor Su-In Lee examined multiple models recently put forward as potential tools for accurately detecting COVID-19 from chest radiography (x-ray). They found that, rather than learning genuine medical pathology, these models rely instead on shortcut learning to draw spurious associations between medically irrelevant factors and disease status. In this case, the models ignored clinically significant indicators and instead keyed on characteristics, such as text markers or patient positioning, that were specific to each dataset when predicting whether an individual had COVID-19.
According to graduate student and co-lead author Alex DeGrave, shortcut learning is less robust than genuine medical pathology and usually means the model will not generalize well outside of the original setting.
Alex DeGrave
“A model that relies on shortcuts will often only work in the hospital in which it was developed, so when you take the system to a new hospital, it fails — and that failure can point doctors toward the wrong diagnosis and improper treatment,” explained DeGrave, who is pursuing his Ph.D. in Computer Science & Engineering along with his M.D. as part of the University of Washington’s Medical Scientist Training Program (MSTP).
Combine that lack of robustness with the typical opacity of AI decision-making, and such a tool could go from potential life-saver to liability.
“A physician would generally expect a finding of COVID-19 from an x-ray to be based on specific patterns in the image that reflect disease processes,” he noted. “But rather than relying on those patterns, a system using shortcut learning might, for example, judge that someone is elderly and thus infer that they are more likely to have the disease because it is more common in older patients. The shortcut is not wrong per se, but the association is unexpected and not transparent. And that could lead to an inappropriate diagnosis.”
The lack of transparency is one of the factors that led DeGrave and his colleagues in the AIMS Lab to focus on explainable AI techniques for medicine and science. Most AI is regarded as a “black box” — the model is trained on massive data sets and spits out predictions without anyone really knowing precisely how the model came up with a given result. With explainable AI, researchers and practitioners are able to understand, in detail, how various inputs and their weights contributed to a model’s output.
Joseph Janizek
The team decided to use these same techniques to evaluate the trustworthiness of models that had recently been touted for what appeared to be their ability to accurately identify cases of COVID-19 from chest radiography. Despite a number of published papers heralding the results, the researchers suspected that something else may be happening inside the black box that led to the models’ predictions. Specifically, they reasoned that such models would be prone to a condition known as worst-case confounding, owing to the paucity of training data available for such a new disease. Such a scenario increased the likelihood that the models would rely on shortcuts rather than learning the underlying pathology of the disease from the training data.
“Worst-case confounding is what allows an AI system to just learn to recognize datasets instead of learning any true disease pathology,” explained co-lead author Joseph Janizek, who, like DeGrave, is pursuing a Ph.D. in the Allen School in addition to earning his M.D. “It’s what happens when all of the COVID-19 positive cases come from a single dataset while all of the negative cases are in another.
“And while researchers have come up with techniques to mitigate associations like this in cases where those associations are less severe,” Janizek continued, “these techniques don’t work in situations where you have a perfect association between an outcome such as COVID-19 status and a factor like the data source.”
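A toy example makes the failure mode concrete. In the sketch below, which assumes numpy and scikit-learn and stands in for real radiographs with small random images, every positive training image comes from a hospital that stamps a bright marker in one corner; the classifier scores almost perfectly in-house by reading the marker, then collapses on external data where the marker no longer tracks disease. This is an illustration of the general phenomenon, not the paper’s experimental setup.

```python
# Toy illustration of worst-case confounding: every positive image
# comes from "hospital A" and every negative from "hospital B", and
# hospital A stamps a marker pixel in the corner of its images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, marker, disease):
    """8x8 'radiographs' flattened to 64 features: disease adds a
    faint diffuse signal, the hospital marker sets one bright pixel."""
    x = rng.normal(size=(n, 64))
    if disease:
        x += 0.3          # weak, genuine pathology signal
    if marker:
        x[:, 0] = 5.0     # strong, medically irrelevant shortcut
    return x

# Confounded training set: disease status and marker coincide.
X_train = np.vstack([make_images(200, marker=True, disease=True),
                     make_images(200, marker=False, disease=False)])
y_train = np.array([1] * 200 + [0] * 200)

# External test set: the marker no longer tracks disease.
X_test = np.vstack([make_images(200, marker=False, disease=True),
                    make_images(200, marker=True, disease=False)])
y_test = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("internal accuracy:", clf.score(X_train, y_train))  # near 1.0
print("external accuracy:", clf.score(X_test, y_test))    # far worse
```

Decoupling the marker from the disease label at test time is the toy analogue of taking the model to a new hospital.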
The team trained multiple deep convolutional neural networks on radiography images from a dataset that replicated the approach used in the published papers. They tested each model’s performance on an internal set of images from that initial dataset that had been withheld from the training data and on a second, external dataset meant to represent new hospital systems. They found that, while the models maintained their high performance when tested on images from the internal dataset, their accuracy was reduced by half on the second, external set. The researchers referred to this discrepancy as a generalization gap and cited it as strong evidence that confounding factors were responsible for the models’ predictive success on the initial dataset. The team then applied explainable AI techniques, including generative adversarial networks (GANs) and saliency maps, to identify which image features were most important in determining the models’ predictions.
The team used explainable AI to visualize the image factors that influenced neural network models’ predictions of COVID-19 status based on chest radiography. Here, saliency maps reveal the models’ tendency to emphasize diagnostically irrelevant features such as laterality tokens, image corners or the patient’s diaphragm in addition to — or instead of — the lung fields when making their predictions.^
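Gradient-based saliency, one common variant of the technique behind maps like those above, scores each pixel by how strongly the model’s output changes when that pixel changes. The sketch below is a minimal illustration in PyTorch, using a tiny untrained network as a stand-in for a trained classifier; it is not the paper’s exact pipeline.

```python
# Minimal gradient-based saliency map sketch: the magnitude of
# d(output)/d(pixel) indicates which pixels most influence the
# prediction. The tiny random CNN below is only a stand-in for a
# trained COVID-19 classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),  # logit for "COVID-19 positive"
)

x = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in radiograph
logit = model(x).squeeze()
logit.backward()                     # gradients of logit w.r.t. pixels

saliency = x.grad.abs().squeeze()    # per-pixel influence on the logit
print(saliency.shape)                # torch.Size([64, 64])
print(saliency.flatten().topk(5))    # the most influential pixels
```

In a shortcut-learning model, the high-saliency pixels cluster on laterality tokens, corners or the diaphragm rather than the lung fields, which is exactly the pattern the team observed.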
When the researchers trained the models on the second dataset, which contained images drawn from a single region and was therefore presumed to be less prone to confounding, that presumption turned out to be wrong: even those models exhibited a corresponding drop in performance when tested on external data. These results upend the conventional wisdom that confounding poses less of an issue when datasets are derived from similar sources, and they reveal the extent to which so-called high-performance medical AI systems could exploit undesirable shortcuts rather than the desired signals.
Despite the concerns raised by the team’s findings, DeGrave said it is unlikely that the models they studied have been deployed widely in the clinical setting. While there is evidence that at least one of the faulty models, COVID-Net, was deployed in multiple hospitals, it is unclear whether it was used for clinical purposes or solely for research.
“Complete information about where and how these models have been deployed is unavailable, but it’s safe to assume that clinical use of these models is rare or nonexistent,” he noted. “Most of the time, healthcare providers diagnose COVID-19 using a laboratory test (PCR) rather than relying on chest radiographs. And hospitals are averse to liability, making it even less likely that they would rely on a relatively untested AI system.”
Janizek believes researchers looking to apply AI to disease detection will need to revamp their approach before such models can be used to make actual treatment decisions for patients.
“Our findings point to the importance of applying explainable AI techniques to rigorously audit medical AI systems,” Janizek said. “If you look at a handful of x-rays, the AI system might appear to behave well. Problems only become clear once you look at many images. Until we have methods to more efficiently audit these systems using a greater sample size, a more systematic application of explainable AI could help researchers avoid some of the pitfalls we identified with the COVID-19 models,” he concluded.
“My team and I are still optimistic about the clinical viability of AI for medical imaging. I believe we will eventually have reliable ways to prevent AI from learning shortcuts, but it’s going to take some more work to get there,” Lee said. “Going forward, explainable AI is going to be an essential tool for ensuring these models can be used safely and effectively to augment medical decision-making and achieve better outcomes for patients.”
The team’s paper, “AI for radiographic COVID-19 detection selects shortcuts over signal,” is one of two from the AIMS Lab to appear in the current issue of Nature Machine Intelligence. Lee is also the senior and corresponding author on the second paper, “Improving performance of deep learning models with axiomatic attribution priors and expected gradients,” for which she teamed up with Janizek, his fellow M.D.–Ph.D. student Gabriel Erion, Ph.D. student Pascal Sturmfels, and affiliate professor Scott Lundberg (Ph.D., ‘19) of Microsoft Research to develop a robust and flexible set of tools for encoding domain-specific knowledge into explainable AI models through the use of attribution priors. Their framework supports the widespread adoption of techniques that will improve model performance and increase computational efficiency in AI for medicine and other areas of applied machine learning.
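As published, expected gradients averages the model’s gradients along straight-line paths from baselines drawn from the data to the input, scaled by the difference between input and baseline. The sketch below is a minimal reading of that idea in PyTorch, not the authors’ released implementation; the toy check at the end illustrates the completeness property, whereby attributions sum to the difference in model output between the input and the baseline.

```python
# Hedged sketch of expected gradients: average path gradients from
# baselines sampled from reference data to the input, scaled by
# (input - baseline). A minimal reading of the published method.
import torch

def expected_gradients(f, x, background, n_samples=500):
    """f: differentiable model; x: input of shape (1, d);
    background: reference data (N, d) to sample baselines from."""
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        baseline = background[torch.randint(len(background), (1,))]
        alpha = torch.rand(1)                       # position on the path
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        grad, = torch.autograd.grad(f(point).sum(), point)
        total += (x - baseline) * grad
    return total / n_samples

# Toy check with f(x) = sum of squares and all-zero baselines:
# attributions approach [1, 4, 9] and sum to f(x) - f(0).
f = lambda z: (z ** 2).sum(dim=-1)
x = torch.tensor([[1.0, 2.0, 3.0]])
background = torch.zeros(16, 3)
attr = expected_gradients(f, x, background)
print(attr)               # approaches [1., 4., 9.]
print(attr.sum(), f(x))   # completeness: sums match (approximately)
```

Attribution priors, in turn, penalize a model during training when its attributions violate domain knowledge, which is how the framework encodes that knowledge into the learned model.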
* Image sourced from the National Institutes of Health (NIH) Clinical Center and used with permission: Wang, X. et al. “ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. https://nihcc.app.box.com/v/ChestXray-NIHCC