In its latest round of funding intended to strengthen United States leadership in artificial intelligence research, the National Science Foundation today designated a new NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI-EDGE) that brings together 30 researchers from 18 universities, industry partners and government labs. Allen School professor Sewoong Oh is among the institute researchers who will spearhead the development of new AI tools and techniques to advance the design of next-generation wireless edge networks. The focus of AI-EDGE, which is led by The Ohio State University, will be on ensuring such networks are efficient, robust and secure.
Among the exciting new avenues Oh and his colleagues are keen to explore is the creation of tools that will enable wireless edge networks to be both self-healing and self-optimizing in response to changing network conditions. The team’s work will support future innovations in a variety of domains, from telehealth and transportation to robotics and aerospace.
“The future is wireless, which means much of the growth in devices and applications will be focused at the network edge rather than in the traditional network core,” Oh said. “There is tremendous benefit to be gained by building new AI tools tailored to such a distributed ecosystem, especially in making these networks more adaptive, reliable and resilient.”
AI-EDGE, which will receive $20 million in federal support over five years, is partially funded by the Department of Homeland Security. It is one of 11 new AI research institutes announced by the NSF today — including the NSF AI Institute for Dynamic Systems led by the University of Washington.
“These institutes are hubs for academia, industry and government to accelerate discovery and innovation in AI,” said NSF Director Sethuraman Panchanathan in the agency’s press release. “Inspiring talent and ideas everywhere in this important area will lead to new capabilities that improve our lives from medicine to entertainment to transportation and cybersecurity and position us in the vanguard of competitiveness and prosperity.”
Oh expects there will be synergy between the work of the new AI-EDGE institute and that of the NSF AI Institute for Foundations in Machine Learning (IFML), unveiled last summer to address fundamental challenges in machine learning and maximize the impact of AI on science and society. As co-PI of IFML, he works alongside Allen School colleagues Byron Boots, Sham Kakade and Jamie Morgenstern and adjunct faculty member Zaid Harchaoui, a professor in the UW Department of Statistics, in collaboration with lead institution the University of Texas at Austin and other academic and industry partners to advance the state of the art in deep learning algorithms, robot navigation, and more. In addition to tackling important research questions with real-world impact, AI-EDGE and IFML also focus on advancing education and workforce development to broaden participation in the field.
Allen School alumna Rajalakshmi Nandakumar (Ph.D., ‘20), now a faculty member at Cornell University, received the SIGMOBILE Doctoral Dissertation Award from the Association for Computing Machinery’s Special Interest Group on Mobility of Systems, Users, Data and Computing “for creating an easily-deployed technique for low-cost millimeter-accuracy sensing on commodity hardware, and its bold and high-impact applications to important societal problems.” Nandakumar completed her dissertation, “Computational Wireless Sensing at Scale,” working with Allen School professor Shyam Gollakota in the University of Washington’s Networks & Mobile Systems Lab.
In celebrating Nandakumar’s achievements, the SIGMOBILE award committee highlighted “the elegance and simplicity” of her approach, which turns wireless devices such as smartphones into active sonar systems capable of accurately sensing minute changes in a person’s movements. The committee also heralded her “courage and strong follow-through” in demonstrating how her technique can be applied to real-world challenges — including significant public health issues affecting millions of people around the world.
Among the contributions Nandakumar presented as part of her dissertation was ApneaApp, a smartphone-based system for detecting obstructive sleep apnea, a potentially life-threatening condition that affects an estimated 20 million people in the United States alone. Unlike the conventional approach to diagnosing apnea, which involves an overnight stay in a specialized lab, the contactless solution devised by Nandakumar and her Allen School and UW Medicine collaborators could be deployed in the comfort of people’s homes. ApneaApp employs the phone’s speaker and microphone to detect changes in a person’s breathing during sleep, without requiring any specialized hardware. It works by emitting inaudible acoustic signals that are reflected back to the device and analyzed for deviations in the person’s chest and abdominal movements. ResMed subsequently licensed the technology and made it available to the public for analyzing sleep quality via its SleepScore app.
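To make the idea concrete, here is a minimal, self-contained sketch of how sonar-based breathing detection can work in principle. It is illustrative only: the “reflection” is synthesized rather than captured from a microphone, the constants are hypothetical, and none of it is ApneaApp’s actual code.

```python
# Illustrative sketch of smartphone sonar breathing detection: transmit an
# inaudible tone, then look for a slow periodic modulation of the reflected
# signal caused by the chest rising and falling.
import numpy as np

FS = 48_000          # audio sample rate (Hz), typical for phone hardware
TONE_HZ = 18_000     # near-inaudible carrier emitted by the phone speaker
DURATION_S = 60      # analyze one minute of audio

def synthesize_reflection(breaths_per_min=15.0):
    """Fake microphone capture: a tone whose amplitude is gently modulated
    by chest motion at the breathing rate."""
    t = np.arange(int(FS * DURATION_S)) / FS
    breath = 0.05 * np.sin(2 * np.pi * (breaths_per_min / 60.0) * t)
    return (1.0 + breath) * np.sin(2 * np.pi * TONE_HZ * t)

def estimate_breathing_rate(mic):
    """Demodulate the carrier, low-pass by window averaging, and find the
    dominant low-frequency peak, which corresponds to the breathing rate."""
    t = np.arange(len(mic)) / FS
    # Coherent demodulation shifts the 18 kHz band down to baseband.
    baseband = mic * np.sin(2 * np.pi * TONE_HZ * t)
    # Average over 0.1 s windows: a crude low-pass filter leaving chest motion.
    win = int(0.1 * FS)
    envelope = baseband[: len(baseband) // win * win].reshape(-1, win).mean(axis=1)
    envelope -= envelope.mean()
    # The strongest spectral peak in 0.1-0.8 Hz (6-48 breaths/min) is the rate.
    freqs = np.fft.rfftfreq(len(envelope), d=win / FS)
    power = np.abs(np.fft.rfft(envelope))
    band = (freqs >= 0.1) & (freqs <= 0.8)
    return freqs[band][np.argmax(power[band])] * 60.0  # breaths per minute

print(f"Estimated rate: {estimate_breathing_rate(synthesize_reflection()):.1f} breaths/min")
```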
Nandakumar’s early work on contactless sleep monitoring opened her eyes to the potential of smartphone-based sonar to support early detection and intervention in another burgeoning public health concern: preventable deaths from accidental opioid overdose. This led to the development of Second Chance, an app that a person activates on their smartphone to unobtrusively monitor changes in their breathing and posture that may indicate the onset of an overdose. Catching these early warning signs as soon as they occur would enable the timely administration of life-saving naloxone. Nandakumar’s colleagues created a startup, Sound Life Sciences, to commercialize this and related work employing sonar to detect and monitor a variety of medical conditions via smart devices.
The SIGMOBILE Dissertation Award is the latest in a string of honors recognizing Nandakumar for her groundbreaking contributions in wireless systems research. She previously earned a Paul Baran Young Scholar Award from the Marconi Society, a Graduate Innovator Award from UW CoMotion, and a Best Paper Award at SenSys 2018.
“Rajalakshmi is brilliant, creative and fearless in her research. She repeatedly questions conventional wisdom and takes a different path from the rest of the community,” said Allen School professor Ed Lazowska, who supported Nandakumar’s nomination. “Her work points to the future — a future in which advances in computer science and computer engineering will have a direct bearing on our capacity to tackle societal challenges such as health and the environment. Rajalakshmi is a game-changer.”
Way to go, Rajalakshmi!
Photo credit: Sarah McQuate/University of Washington
Allen School professor Yejin Choi is among a team of researchers recognized by the Computer Vision Foundation with its 2021 Longuet-Higgins Prize for their paper “Baby talk: Understanding and generating simple image descriptions.” The paper was among the first to explore the new task of generating image captions in natural language by bridging two fields of artificial intelligence: computer vision and natural language processing. Choi, who is also a senior research manager at the Allen Institute for AI (AI2), completed this work while a faculty member at Stony Brook University. She and her co-authors originally presented the paper at the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Baby talk is the grammatically simplified speech adults use to help infants acquire language and build their understanding of the world. Drawing on this concept, Choi and her collaborators set out to teach machines to generate simple yet original sentences describing what they “see” in a given image. This was a significant departure from conventional approaches grounded in the retrieval and summarization of pre-existing content. To move past the existing paradigm, the researchers constructed statistical models for visually descriptive language by mining and parsing the large quantities of text available online, then paired those models with the latest recognition algorithms. This strategy enabled the new system to describe the content of an image by generating sentences specific to that particular image, rather than shoehorning content drawn from a limited document corpus into a suitable description. The resulting captions, the team noted, described the visual content with greater relevance and precision.
“At the time we did this work, the question of how to align the semantic correspondences or alignments across different modalities, such as language and vision, was relatively unstudied. Image captioning is an emblematic task to bridge the longstanding gap between NLP research with computer vision,” explained Choi. “By bridging this divide, we were able to generate richer visual descriptions that were more in line with how a person might describe visual content — such as their tendency to include not just information on what objects are pictured, but also where they are in relation to each other.”
This incorporation of spatial relationships into their language generator was key in producing more natural-sounding descriptions. Up to that point, computer vision researchers who focused on text generation from visual content relied on spatial relationships between labeled regions of an image solely to improve labeling accuracy; they did not consider them outputs in their own right on a par with objects and modifiers. By contrast, Choi and her colleagues considered the relative positioning of individual objects as integral to developing the computer vision aspect of their system, to the point of using these relationships to drive sentence generation in conjunction with the depicted objects and their modifiers.
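As a rough illustration of that generation strategy (a toy sketch, not the authors’ system; the detections and the geometric scoring rule below are invented for demonstration), captions can be realized directly from detected (attribute, object) pairs plus a preposition chosen from bounding-box geometry:

```python
# Illustrative sketch of caption generation from detections: each object
# carries an attribute, each pair of objects is assigned a spatial
# preposition, and a simple template realizes the result as text.

# Hypothetical detector output: (attribute, object, bounding box as x, y, w, h)
detections = [
    ("gray", "sky", (0, 0, 640, 200)),
    ("gray", "sheep", (120, 320, 80, 60)),
    ("gray", "road", (0, 340, 640, 140)),
]

def preposition(box_a, box_b):
    """Pick a preposition from box geometry; real systems learn these
    scores from parsed web text and image statistics."""
    ax, ay = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx, by = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    if ay < by - box_b[3] / 2:
        return "above"
    if abs(ax - bx) < (box_a[2] + box_b[2]) / 2:
        return "by"
    return "near"

def describe(dets):
    """Template realization over all object pairs."""
    phrases = []
    for i, (attr_a, obj_a, box_a) in enumerate(dets):
        for attr_b, obj_b, box_b in dets[i + 1:]:
            prep = preposition(box_a, box_b)
            phrases.append(f"the {attr_a} {obj_a} {prep} the {attr_b} {obj_b}")
    return "; ".join(phrases) + "."

# Prints: "the gray sky above the gray sheep; the gray sky above the gray
# road; the gray sheep by the gray road."
print(describe(detections))
```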
Some of the results were deemed to be “astonishingly good” by the human evaluators. In one example presented in the paper, the system accurately described a “gray sheep” as being positioned “by the gray road”; the “gray sky,” it noted, was above said road. For another image, the system correctly pegged that the “wooden dining table” was located “against the first window.” The system also accurately described the attributes and relative proximity of rectangular buses, shiny airplanes, furry dogs, and a golden cow — among other examples.
The Longuet-Higgins Prize is an annual “test of time” award presented during CVPR by the IEEE Pattern Analysis and Machine Intelligence (PAMI) Technical Committee to recognize fundamental contributions that have had a significant impact in the field of computer vision. Choi’s co-authors on this year’s award-winning paper include then-master’s students Girish Kulkarni, Visruth Premraj and Sagnik Dhar; Ph.D. student SiMing Li; and professors Alexander C. Berg and Tamara L. Berg, both now on the faculty at the University of North Carolina at Chapel Hill.
Allen School alumnus Adrian Sampson (Ph.D., ‘15) has been recognized by the IEEE Computer Society’s Technical Committee on Computer Architecture with the 2021 Young Computer Architect Award for “contributions to approximate computing and hardware synthesis from high-level representations.” This award honors an outstanding researcher who has completed their doctoral degree within the past six years and who has made innovative contributions to the field of computer architecture. Sampson, who completed his Ph.D. working with Allen School professor Luis Ceze and Allen School professor and vice director Dan Grossman, is now a faculty member in the Department of Computer Science at Cornell University in the Cornell Ann S. Bowers College of Computing and Information Science.
“Adrian’s work on programming language-hardware co-design for approximate computing led to a new research area with lots of follow-on research by the community,” said Ceze. “His research impact is complemented by him being an amazingly creative, caring and fun human being. I could not be more proud of him.”
Sampson devoted his early career to new abstractions for approximate computing, with a focus on rethinking modern computer architecture to reduce the energy consumption of computer systems. For instance, Sampson, along with Ceze and Grossman, created EnerJ, a language for principled approximate computing that allows programmers to indicate where it is safe to permit occasional errors in order to save energy. While the power consumption of computers is often constrained by correctness guarantees, EnerJ, an extension to Java, exposes hardware faults in a safe, principled manner, enabling power-saving techniques such as operating memory at lower voltage. Sampson’s research shows that approximate computing is a promising way of saving energy in large classes of applications running on a wide range of systems, including embedded systems, mobile phones and servers.
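EnerJ itself extends Java with checked type qualifiers; as a loose, language-shifted illustration of the underlying idea (hypothetical names, not EnerJ syntax), the sketch below tags values as approximate and forbids them from flowing into contexts that must be exact:

```python
# Loose illustration of the EnerJ idea. Values default to "precise"; a
# programmer opts data into "approx" storage, and the discipline forbids
# approximate data from flowing into precise sinks, so occasional hardware
# errors can only corrupt data explicitly marked as error-tolerant.

class Approx:
    """Wrapper marking a value as safe to keep in error-prone, low-voltage
    storage. Arithmetic on it stays approximate (the tag propagates)."""
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        other_v = other.value if isinstance(other, Approx) else other
        return Approx(self.value + other_v)

def precise_sink(x):
    """A context that must be exact, e.g. an array index or branch condition.
    Rejecting Approx values here is the safety rule EnerJ enforces statically."""
    if isinstance(x, Approx):
        raise TypeError("approximate value used where precision is required")
    return x

pixel = Approx(200)            # image data: occasional bit flips are tolerable
brightness = pixel + 10        # stays approximate
loop_bound = precise_sink(480) # fine: a precise value
# precise_sink(brightness)     # would raise: approx data cannot steer control flow
```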
“Modern architecture gives everyone a license to try ideas that are weird and ambitious and potentially transformative,” Sampson said about his work. “It would be silly not to take up that opportunity.”
At Cornell, Sampson and his research group, Capra, focus on programming languages and compilers for generating hardware accelerators. One target in particular is field-programmable gate arrays (FPGAs), which allow the co-design of applications with hardware accelerators but remain notoriously hard to program. In response, Sampson and his team created Dahlia, a programming language that leverages an affine type system to constrain programs to represent only valid hardware designs. Dahlia aims to compile high-level programs into performant hardware designs and to offer open-source tools at the intersection of programming languages and architecture.
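A toy sketch helps show why an affine discipline matches hardware reality (hypothetical names, not Dahlia code): a single-ported memory bank physically supports one access per clock cycle, so a program that uses the same bank twice in one cycle does not describe buildable hardware. Dahlia rejects such programs statically; this sketch rejects them at run time:

```python
# Loose illustration of affine resource use in hardware design.

class Bank:
    """A single-ported memory bank: at most one access per cycle (affine use)."""
    def __init__(self, data):
        self.data = list(data)
        self.used_this_cycle = False
    def read(self, i):
        if self.used_this_cycle:
            raise RuntimeError("second access to a single-ported bank in one cycle")
        self.used_this_cycle = True
        return self.data[i]

def next_cycle(*banks):
    """Clock edge: every bank's port becomes available again."""
    for b in banks:
        b.used_this_cycle = False

a = Bank([1, 2, 3, 4])
total = a.read(0)      # ok: first access this cycle
next_cycle(a)
total += a.read(1)     # ok: new cycle, port free again
# Calling a.read(2) and a.read(3) within the same cycle would raise,
# just as the corresponding design could not be realized on one port.
```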
“What Adrian has leveraged in clever and impactful ways throughout his career is that many potential amazing performance advantages at the hardware level are possible only if given extra information or assurance from the software as to where the techniques are safe to use,” Grossman said.
In addition to the Young Computer Architect Award, Sampson has been recognized with an NSF CAREER Award in 2019 and a Google Faculty Research Award in 2016, among other honors, and has published more than 40 papers.
Allen School professors Maya Cakmak and Dieter Fox, along with their collaborators at NVIDIA’s AI Robotics Research Lab, earned the award for Best Paper in Human-Robot Interaction at the IEEE International Conference on Robotics and Automation (ICRA 2021) for introducing a new vision-based system for the smooth transfer of objects between human and robot. In “Reactive Human-to-Robot Handovers of Arbitrary Objects,” the team employs visual object and hand detection, automatic grasp selection, closed-loop motion planning, and real-time manipulator control to enable the successful handoff of previously unknown objects of various sizes, shapes and rigidity. It’s a development that could put more robust human-robot collaboration within reach.
“Dynamic human-robot handovers present a unique set of research challenges compared to grasping static objects from a recognizable, stationary surface,” explained Fox, director of the Allen School’s Robotics and State Estimation Lab and senior director of robotics research at NVIDIA. “In this case, we needed to account for variations not just in the objects themselves, but in how the human moves the object, how much of it is covered by their fingers, and how their pose might constrain the direction of the robot’s approach. Our work combines recent progress in robot perception and grasping of static objects with new techniques that enable the robot to respond to those variables.”
The system devised by Fox, Cakmak, and NVIDIA researchers Wei Yang, Chris Paxton, Arsalan Mousavian and Yu-Wei Chao does not require objects to be part of a pre-trained dataset. Instead, it relies on a novel segmentation module that enables accurate, real-time hand and object segmentation, including of objects the robot is encountering for the very first time. Rather than attempting to directly segment objects in the hand, which would not provide the flexibility and adaptability they sought, the researchers trained a fully convolutional network for hand segmentation given an RGB image, and then inferred object segmentation based on depth information. To ensure temporal consistency and stability of the robot’s grasps in response to changes in the user’s motion, the team extended the GraspNet grasp planner to refine the robot’s grasps over consecutive frames. This enables the system to react to a user’s movements, even after the robot has begun moving, while consistently generating grasps and motions that would be regarded as smooth and safe from a human perspective.
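The overall control loop can be pictured with a structural sketch like the one below. Every component name in it is hypothetical and the details are greatly simplified; it is a sketch of the loop as described above, not the team’s implementation:

```python
# Structural sketch of a closed-loop handover pipeline: segment the hand from
# RGB, infer the held object's mask from depth, propose grasps, smooth them
# across frames for temporal consistency, and replan motion as the human moves.
from dataclasses import dataclass

@dataclass
class Grasp:
    pose: tuple    # 6-DoF grasp pose (simplified to a flat tuple here)
    score: float   # planner confidence

def handover_loop(camera, segmenter, grasp_planner, controller, alpha=0.8):
    """camera/segmenter/grasp_planner/controller are hypothetical components
    standing in for the perception, planning and control modules."""
    previous = None
    while not controller.object_secured():
        rgb, depth = camera.read()
        hand_mask = segmenter.segment_hand(rgb)                   # FCN on RGB
        obj_mask = segmenter.object_from_depth(depth, hand_mask)  # depth-based
        candidates = grasp_planner.propose(obj_mask, depth)
        best = max(candidates, key=lambda g: g.score)
        # Temporal smoothing: blend with the previous grasp so the arm's
        # target does not jump frame to frame, even mid-motion. (Real systems
        # would blend rotations properly rather than component-wise.)
        if previous is not None:
            best = Grasp(
                pose=tuple(alpha * p + (1 - alpha) * q
                           for p, q in zip(previous.pose, best.pose)),
                score=best.score,
            )
        previous = best
        controller.move_toward(best.pose)   # closed-loop: replan every frame
    controller.close_gripper()
```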
Crucially, the researchers’ approach places zero constraints on the user regarding how they may present an object to the robot; as long as the object is graspable by the robot, the system can accommodate its presentation in different positions and orientations. The team tested the system on more than two dozen common household objects, including a coffee mug, a remote control, a pair of scissors, a toothbrush and a tube of toothpaste, to demonstrate how it generalizes across a variety of items. That variety goes beyond differences between categories of object, as the objects within a single category can also differ significantly in their appearance, dimensions and deformability. According to Cakmak, this is particularly true in the context of people’s homes, which are likely to reflect an array of human needs and preferences to which a robot would need to adapt. To ensure their approach would have the highest utility in the home for users who need assistance with fetching and returning objects, the researchers evaluated their system using a set of everyday objects prioritized by people with amyotrophic lateral sclerosis (ALS).
“We may be able to pass someone a longer pair of scissors or a fuller plate of food without thinking about it — and without causing an injury or making a mess — but robots don’t possess that intuition,” said Cakmak, director of the Allen School’s Human-Centered Robotics Lab. “To effectively assist humans with everyday tasks, robots need the ability to adapt their handling of a variety of objects, including variability among objects of the same type, in line with the human user. This work brings us closer to giving robots that ability so they can safely and seamlessly interact with us, whether that’s in our homes or on the factory floor.”
ICRA is the IEEE’s flagship conference in robotics. The 2021 conference, which followed a hybrid model combining virtual and in-person sessions in Xi’an, China, received more than 4,000 paper submissions spanning all areas of the field.
Visit the project page for the full text of the research paper, demonstration videos and more.
Let’s give a big hand to Maya, Dieter and the entire team!
Allen School professor Emina Torlak, co-founder of the UNSAT group and a member of the Programming Languages & Software Engineering (PLSE) group, earned the 2021 Robin Milner Young Researcher Award from the Association for Computing Machinery’s Special Interest Group on Programming Languages (ACM SIGPLAN) for her research contributions in automated reasoning to make computer programming easier. The award was named in honor of Milner, the legendary British computer scientist who was a leader in programming language research and had a passion for mentoring his younger colleagues. Torlak is the 10th recipient of the award, which is given to a researcher within 20 years of their start of graduate school.
“This prestigious award is explicit recognition of what is well-known in the programming-languages research community: Emina builds beautifully engineered systems that are ready for serious use not just by her group but by other researchers, and she does so by extending the elegant logical foundations of our field in sophisticated ways,” said Dan Grossman, professor and vice director of the Allen School. “Her tools don’t just work; they do things nobody else knows how to do.”
In her work, Torlak combines new language abstractions, algorithmic insights, robust design, and thorough implementation. From the start of her career as a doctoral student at MIT, she has been building tools and applications that find and solve problems in programming languages.
“Automatic verification of programs ensures that programs are free of bugs that hackers could exploit to break into the system. Unfortunately, verifying program correctness is a specialized task that programmers are not trained to perform,” said Allen School professor Rastislav Bodik. “Emina’s research goal has been to make automatic verification of programs accessible to programmers who lack expertise in verification. She has accomplished this vision by automating a lot of the verification process, in a clever way. Her key insight was that it was impossible to construct a super-verifier that would automatically verify all kinds of programs, and that to automate the process, verifiers would need to be tailored to narrower classes of programs. She designed a tool that allowed programmers to automatically construct such automatic verifiers for their programs.”
An example is Torlak’s Rosette framework, which allows users to quickly develop their own domain-specific program synthesis algorithms.
“Rosette is an ingenious design that combines advances from programming languages such as meta-programming, compilation as in partial evaluation and formal methods such as symbolic computation,” Bodik said. “Using Rosette, programmers were able to construct verifiers in a few days where it would take months to do so with traditional methods. Rosette was also used to construct synthesizers of programs, which are tools that automatically write programs. Using Rosette, researchers have built synthesizers that reached parity with human programming experts on a range of advanced programming tasks. This success owes both to Emina’s design skills and the leadership in supporting the open-source community around Rosette.”
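Rosette itself is built on Racket, so the following is only a loose analogue of the solver-aided style, written in Python against the Z3 SMT solver (it requires the z3-solver package). It shows the key move: rather than testing a property on a handful of inputs, the solver checks it against every possible input at once, or produces a counterexample:

```python
# Loose Python analogue of solver-aided verification using Z3 (not Rosette).
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)  # a symbolic 32-bit value standing for every possible input

# Verify a compiler-style rewrite: x * 8 may be replaced by x << 3.
# If the solver cannot find an input where the two sides differ ("unsat"),
# the rewrite is correct for all 2**32 inputs.
s = Solver()
s.add(x * 8 != x << 3)
print("x * 8 == x << 3:", "verified" if s.check() != sat else f"bug: {s.model()}")

# A wrong rewrite is refuted with a concrete counterexample instead.
s2 = Solver()
s2.add(x * 8 != x << 4)
if s2.check() == sat:
    print("x * 8 == x << 4 is wrong, e.g. for x =", s2.model()[x])
```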
Leveraging Rosette, Torlak helped to create Serval, a framework for creating automated verifiers for systems software. It overcomes several obstacles in scaling automated verification, including the developer effort required to write verifiers, the difficulty of finding and fixing performance bottlenecks, and limitations on their applicability to existing systems. From there, she contributed to the development of Jitterbug, a tool for writing and proving the correctness of just-in-time (JIT) compilers for the extended Berkeley Packet Filter (eBPF), a security-critical component of the Linux operating system. Jitterbug produces a formal correctness guarantee for eBPF JITs through a precise specification of JIT correctness and an automated verification strategy that scales to practical implementations.
“Emina Torlak is a leader in the area of automated verification. She has made both conceptual contributions and built tools that are state-of-the-art and widely used,” said the Milner Award selection committee. “On the conceptual side, the notion of a solver-aided programming language is hers and has quickly become standard terminology.”
One of the core problems underpinning the theory and practice of computer science, with ramifications for many other domains, the Traveling Salesperson Problem (TSP) seeks the most efficient route through multiple destinations and back to the starting point. Short of finding the optimum, computer scientists have relied on approximation algorithms to chip away at the problem by finding an answer that is, as Klein described in a piece he wrote for The Conversation, “good enough.” As the name suggests, TSP has obvious applications in transportation and logistics. But, Klein explained, a solution would have implications for many other fields, from genomics to circuit board design.
“This is important for more than just planning routes,” Klein wrote. “Any of these hard problems can be encoded in the Traveling Salesperson Problem, and vice versa: Solve one and you’ve solved them all.”
Klein and his colleagues may not have completely solved the problem, but they brought the field an important — and hugely symbolic — step closer. Their algorithm, which they revealed to the world last summer, is the first to break the elusive 50% threshold for TSP — meaning the cost of its result will always be less than 50% above the optimum. While their result represents only a marginal improvement over the previous state of the art in real terms, at roughly 49.99999%, the team’s achievement offers a foundation on which they and their fellow researchers can build in future work. And build they most assuredly will; according to Karlin, once a researcher encounters TSP, it is very difficult to set it aside.
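The team’s algorithm is far too intricate to excerpt, but the classic “double-tree” method conveys where such percentage guarantees come from. The sketch below is standard textbook material, not the new result: on metric instances, a minimum spanning tree costs at most the optimal tour, and shortcutting a walk of that tree at most doubles the cost, so the tour returned is within 100% of optimum. Christofides’ algorithm sharpens the same recipe to 50% above optimum, the barrier the new work finally broke.

```python
# Classic double-tree 2-approximation for metric TSP (for intuition only).
import math

def tour_2approx(points):
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # Prim's algorithm: build a minimum spanning tree rooted at vertex 0.
    in_tree, adj = {0}, {i: [] for i in range(n)}
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda i: best[i][0])
        _, parent = best.pop(v)
        in_tree.add(v)
        adj[parent].append(v)
        for u in best:
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)
    # Preorder walk of the MST, skipping repeated vertices ("shortcutting");
    # by the triangle inequality each shortcut never increases the cost.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(adj[v]))
    return tour + [0]

pts = [(0, 0), (0, 2), (3, 1), (5, 0), (4, 4)]
t = tour_2approx(pts)
print(t, sum(math.dist(pts[a], pts[b]) for a, b in zip(t, t[1:])))
```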
“One of my favorite theoretical computer scientists, Christos Papadimitriou, summed it up perfectly,” observed Karlin. “He said, ‘TSP is not a problem. It is an addiction.’”
Her faculty colleague Oveis Gharan knows the feeling well, having spent the past 10 years tackling various elements of TSP. For him, this milestone is another step forward on an open-ended journey — and one he is keen to delve into more deeply now that the dust has settled and the proof has checked out.
“A natural follow-up to our result would be to find a simpler proof with a significantly bigger improvement over the 50%, but to be honest, that is not really what interests me,” said Oveis Gharan. “Often, when someone breaks a long-standing barrier, they develop or employ several new tools or ideas in the process. These typically are not well understood in their full generality, because they are mainly specialized for that specific application. So, what I want to do after such a result is to better understand these new tools and ideas. Why do they work? What other applications can they possibly have? Can we generalize them further? Is there any other open problem that they can help us address? We’re exploring these questions now with regard to some of the tools that we used in this TSP paper.”
Lin’s work with Jain and Sahai on indistinguishability obfuscation (iO) also represents a discovery that was decades in the making. In this case, the researchers proved the security of a potentially powerful cryptographic scheme for protecting computer programs from would-be hackers by making them unintelligible to adversaries while retaining their functionality.
The team’s breakthrough paper demonstrates that provably secure iO can be constructed from the subexponential hardness of four well-founded assumptions: Symmetric External Diffie-Hellman (SXDH) on pairing groups, Learning with Errors (LWE), Learning Parity with Noise (LPN) over large fields, and a Boolean Pseudo-Random Generator (PRG) that is very simple to compute. All four have a long history of study well-rooted in complexity, coding and number theory — enabling Lin and her collaborators to finally put a cryptographic approach that was first described in 1976, and mathematically formalized 20 years ago, on a firm footing. The researchers also devised a simple new method for leveraging LPN over fields and a PRG in NC0 — a simple model of computation in which every output bit depends on a constant number of input bits — to build a structured-seed PRG (sPRG). The seed, which comprises both a public and a private part, maintains the scheme’s pseudorandomness even when an adversary is able to see the public seed and the sPRG output.
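For readers who want the formal target, the standard definition of indistinguishability obfuscation from the cryptography literature (not specific to this paper) can be stated compactly:

```latex
% Indistinguishability obfuscation (standard definition).
% Correctness: the obfuscated program computes the same function:
%   \forall C, x :\; \Pr[\, iO(C)(x) = C(x) \,] = 1.
% Security: for any two equal-size circuits computing the same function,
% their obfuscations are computationally indistinguishable:
%   \big( |C_0| = |C_1| \;\wedge\; \forall x :\, C_0(x) = C_1(x) \big)
%   \;\Longrightarrow\; iO(C_0) \approx_c iO(C_1).
```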
Now that they have verified the security of iO based on well-founded assumptions, Lin and her collaborators are already working to extend their results — and they are eager to see others expand upon them, as well.
“The next step is further pushing the envelope and constructing iO from weaker assumptions,” Lin said when the team announced its results. “At the same time, we shall try to improve the efficiency of the solutions, which at the moment is very far from being practical. These are ambitious goals that will need the joint effort from the entire cryptography community.”
STOC is organized by the Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory (ACM SIGACT).
What if you could engage with the world — go to class, do your job, meet up with friends, get a workout in — without leaving the comfort of your bedroom thanks to mixed-reality technology?
What if you could customize your avatar in many ways, but you couldn’t make it reflect your true racial or cultural identity? What if the best way to get a high-paying job was based on your access to this technology, only you are blind and the technology was not designed for you?
And what if notions of equity and justice manufactured in that virtual world — at least, for some users — still don’t carry over into the real one?
These and many other questions come to mind as one reads “Our Reality,” a new science-fiction novella authored by Tadayoshi (Yoshi) Kohno, a professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering. Kohno, who co-directs the UW’s Security and Privacy Research Lab and Tech Policy Lab, has long regarded science fiction as a powerful means through which to critically examine the relationship between technology and society. He previously explored the theme in a short piece he contributed to the Tech Policy Lab’s recent anthology, “Telling Stories,” which examines culturally responsive artificial intelligence (AI).
“Our Reality,” so named for the mixed-reality technology he imagines taking hold in the post-pandemic world circa 2034, is Kohno’s first foray into long-form fiction. The Allen School recently sat down with Kohno with the help of another virtual technology — Zoom — to discuss his inspiration for the story, his personal as well as professional journey to examining matters of racism, equity and justice, and how technology meant to help may also do harm.
First of all, congratulations on the publication of your first novella! As a professor of computer science and a well-known cybersecurity researcher, did you ever expect to be writing a work of science fiction?
Yoshi Kohno: I’ve always been interested in science fiction, ever since I was a kid. I’ve carried that fascination into my career as a computer science educator and researcher. Around 10 years ago, I published a paper on science fiction prototyping and computer security education that talked about my experiences asking students in my undergraduate computer security course to write science fiction as part of the syllabus. Writing science fiction means creating a fictional world and then putting technology inside that world. Doing so forces the writer — the students, in my course — to explore the relationship between society and technology. It has been a dream of mine to write science fiction of my own. In fact, when conducting my research — on automotive security, or cell phone surveillance, or anything else — I have often discussed with my students and colleagues how our results would make good plot elements for a science fiction story! The teenage me would be excited that I finally did it.
What was the inspiration for “Our Reality,” and what made you decide now was the right time to realize that childhood dream?
YK: Recently, many people experienced a significant awakening to the harms that technology can bring and to the injustices in society. We, as a society and as computer scientists, have a responsibility to address these injustices. I have written a number of scientific, scholarly papers. But I realized that some of the most important works in history are fiction. Think about the book “1984,” for example, and how often people quote from it. Now, I know my book will not be nearly as transformative as “1984,” but the impact of that book and others inspired me.
I was particularly disturbed by how racism can manifest in technologies. As an educator, I believe that it is our responsibility to help students understand how to create technologies that are just, and certainly that are not racist. I wrote my story with the hope that it would inform the reader’s understanding of racism and racism in technology. At the same time, I also want to explicitly acknowledge that there are already amazing books on the topic of racism and technology. Consider, for example, Dr. Ruha Benjamin’s book “Race After Technology,” or Dr. Safiya Umoja Noble’s book “Algorithms of Oppression.” I hope that readers committed to addressing injustices in society and in technology read these books, as well as the other books I cite in the “Suggested Readings” section of my novella.
In addition to racism in technology, my story also tries to surface a number of other important issues. I sometimes intentionally added flaws to the technologies featured in “Our Reality” — flaws that can serve as starting points for conversations.
How did your role as the Allen School’s Associate Director for Diversity, Equity, Inclusion and Access inform your approach to the story?
YK: I view diversity, equity, inclusion, and access as important to me as an individual, important for our school, and important for society. I’ve thought about the connection between society and technology throughout my career. As a security researcher and educator, I have to think about the harms that technologies could bring. But when I took on the role of Associate Director, I realized that I had a lot more learning to do. I devoured as many resources as I could, I talked with many other people with expertise far greater than my own, I read many books, I enrolled in the Cultural Competence in Computing (3C) Fellows Program, and so on. This is one of the things that we as a computer science community need to always be doing. We should always be open to learning more, to questioning our assumptions, and to realizing that we are not the experts.
One of the things that I’ve always cared about as an educator is helping students understand not only the relationship between society and technology, but also how technologies can do harm. I’m motivated to get people thinking about these issues and proactively working to mitigate those harmful impacts. If people are not already thinking about the injustices that technologies can create, then I hope that “Our Reality” can help them start.
One of the main characters in “Our Reality,” Emma, is a teenage Black girl. How did you approach writing a character whose identity and experiences would be so different from your own?
YK: Your question is very good. I have several answers that I want to give. First, this question connects to one of the teaching goals of “Our Reality” and, in particular, to Question 6 in the “Questions for Readers” section of the novella. Essentially, how should those who design technologies approach their work when the users or affected stakeholders might have identities very different from their own? And how do they ensure that they don’t rely on stereotypes or somehow create an injustice for those users or affected stakeholders? Similarly, in writing “Our Reality,” and with my goal of contributing to a discussion about racism in technology, I found myself in the position of centering Emma, a teenage girl who is Black. This was a huge responsibility, and I knew that I needed to approach this responsibility respectfully and mindfully. My own identity is that of a cisgender man who is Japanese and white.
The first thing I did towards writing Emma was to buy the book “Writing the Other” by Nisi Shawl and Cynthia Ward. It is an amazing book, and I’d recommend it to anyone who wishes to write stories that include characters with identities other than their own. I read their book last summer. I then discovered that Shawl was teaching a year-long course in science fiction writing through the Hugo House here in Seattle, so I enrolled. That taught me a significant amount about writing science fiction and writing about diverse characters.
I should acknowledge that while I tried the best I could, I might have made mistakes. While I have written many versions of this story, and while I’ve received amazingly useful feedback along the way, any mistakes that remain are mine and mine alone.
In your story, Our Reality is also the name of a technology that allows users to experience a virtual, mixed-reality world. One of the other themes that stood out is who has access to technologies like Our Reality — the name implies a shared reality, but it’s only really shared by those who can afford it. There are the “haves” like Emma and her family, and the “have nots” like Liam and his family. How worried are you that unequal access to technology will exacerbate societal divides?
YK: I’m quite worried that technology will exacerbate inequity along many dimensions. In the context of mixed-reality technology like Our Reality, I think there are three reasons that a group of people might not access it. One is financial; people may not be able to afford the mixed-reality Goggles, the subscription, the in-app purchases, and so on. They either can’t access it at all or will have unequal access to its features, like Liam discovered when he tried to customize his avatar beyond the basic free settings. The second reason is because the technology was not designed for them. I alluded to this in the story when I brought up Liam’s classmate Mathias, who is blind. Finally, some people will elect not to access the technology for ideological reasons. When I think about the future of mixed-reality technologies, like Our Reality, I worry that society will further fracture into different groups, the “haves” or “designed fors” and the “have nots” or “not designed fors.”
Emma’s mother is a Black woman who holds a leadership position in the company that makes Our Reality, but her influence is still limited. For example, Emma objects to the company’s decision to make avatars generic and “raceless,” which means she can’t fully be herself in the virtual world. What did you hope people would take away from that aspect of the story?
YK: First, this is an example of one of the faults that I intentionally included in the technologies in “Our Reality.” I also want to point the reader to the companion document that I prepared, which describes in more detail some of the educational content that I tried to include in Our Reality. Your question connects to so many important topics, such as the notion of “the unmarked state” — the default persona that one envisions if they are not provided with additional information — as well as colorblind racism. This also connects to something that Dr. Noble discusses in the book “Algorithms of Oppression” and which I tried to surface in “Our Reality” — that not only do we need to increase the diversity within the field, but we need to overcome systemic issues that stand in the way of fully considering the needs of all people, and systemic inequities, in the design of technologies.
Stepping back, I am hoping that readers start to think about some of these issues as they read “Our Reality.” I hope that they realize that the situation described in “Our Reality” is unjust and inequitable. I hope they read the companion document, to understand the educational content that I incorporated into “Our Reality.” And then I hope that readers are inspired to read more authoritative works, like “Algorithms of Oppression” and the other books that I reference in the novella and in the companion document.
You and professor Franziska Roesner, also in the Allen School, have done some very interesting research with your students in the UW Security and Privacy Research Lab. Your novella incorporates several references to issues raised in that research, such as tracking people via online advertising and how to safeguard users of mixed-reality technologies from undesirable or dangerous content. It almost feels uncomfortably close to your version of 2034 already. So how can we as a society, along with our regulatory frameworks, catch up and keep up with the pace of innovation?
YK: Rather than having society and our regulatory frameworks catch up to the pace of innovation, we should consider slowing the pace of innovation. Often there is a tendency to think that technology will solve our problems; if we just build the next technology, things will be great, or so it seems like people often think. Instead of perpetuating that mentality, maybe we should slow down and be more thoughtful about the long-term implications of technologies before we build them — and before we need to put any regulatory framework in place.
As part of changing the pace of innovation, we need to make sure that the innovators of technology understand the broader society and global context in which technologies exist. This is one of the reasons why I appreciate the Cultural Competence in Computing (3C) Fellows Program coming out of Duke so much, and why I encourage other educators to apply. That program was created by Dr. Nicki Washington, Dr. Shaundra B. Daily, and graduate assistant Cecilé Sadler at Duke University. The program’s goal is to help empower computer science educators, throughout the world, with the knowledge and skills necessary to help students understand the broader societal context in which technologies sit.
As an aside, one of the reasons that my colleagues Ryan Calo in the School of Law and Batya Friedman in the iSchool and I co-founded the Tech Policy Lab at the University of Washington is that we understood the need for policymakers and technologists to also come together and explore issues at the intersection between society, technology, and policy.
Speaking of understanding context, in the companion document to “Our Reality” you note “Computing systems can make the world a far better place for some, but a far worse one for others.” Can you elaborate?
YK: There are numerous examples of technologies that, when one looks at them from a 50,000-foot perspective, might seem to be beneficial to individuals or society. But when one looks more closely at the circumstances of specific individuals, one finds that they’re not providing a benefit; in fact, they have the potential to actively cause harm. Consider, for example, an app that helps a user find the location of their family or friends. Such an app might seem generally beneficial — it could help a parent or guardian find their child if they get separated at a park. But now consider situations of domestic abuse. Someone could use that same technology to track and harm their victim.
Another example, which I explored in “Our Reality” through Emma’s encounter with the police drones, is inequity across different races. Face detection and recognition systems are now widely understood to be inequitable because they have decreased accuracy with Black people compared to white people. This is incredibly inequitable and unjust. I encourage readers to learn more about the inequities with face detection and face recognition. One great place to start is the film “Coded Bias” directed and produced by Shalini Kantayya, which centers MIT Media Lab researcher Joy Buolamwini.
At one point, Emma admonishes her mother, a technologist, that she can’t solve everything with technology. How do we determine what is the best use of technology, and what is the responsibility of your colleagues who are the ones inventing it?
YK: I think that it is absolutely critical for those who are driving innovation to understand how the technology that they create sits within a broader society and interacts with people, many of whom are different from themselves. I referred earlier to this notion of a default persona, also called the “unmarked state.” Drawing from Nisi Shawl and Cynthia Ward’s book “Writing the Other,” this is more often than not someone who is white, male, heterosexual, single, young, and with no disabilities. Not only should one be thinking about how a technology would fit in the context of society, but also consider it in the context of the many people who do not identify with this default persona.
On top of that, when designing technologies for someone “not like me,” people need to be sure they are not invoking stereotypes or false assumptions about those who are not like themselves. There’s a book called “Design Justice” by Dr. Sasha Costanza-Chock about centering the communities for whom we are designing. As technologists, we ought to be working with those stakeholders to understand what technologies they need. And we shouldn’t presume that any specific technology is needed. It could be that a new technology is not needed.
Some aspects of Our Reality sound like fun — for example, when Emma and Liam played around with zero gravity in the science lab. If you had the opportunity and the means to use Our Reality, would you?
YK: I think it is an open research question about what augmented reality and mixed-reality technologies will be like in the next 15 years. I do think that technologies like Our Reality will exist in the not-too-distant future. But I hope that the people developing these technologies will have addressed the access questions and societal implications that I raised in the story. As written, I think I would enjoy aspects of the technology, but I would not feel comfortable using it if the equity issues surrounding Our Reality aren’t addressed.
Stepping even further back, there is a whole class of risks with mixed-reality technologies that is not deeply surfaced in this story: computer security risks. This is a topic that Franziska Roesner and I have been studying at UW for about 10 years, along with our colleagues and students. There are a lot of challenges to securing future mixed-reality platforms and applications.
So you would be one of those ideological objectors you mentioned earlier.
YK: I would, yes. And, in addition to issues of access and equity and the various security risks, I used to also be a yoga instructor. I like to see and experience the world through my real senses. I fear that mixed-reality technologies are coming. But for me, personally, I don’t want to lose the ability to experience the world for real, rather than through Goggles.
Who did you have in mind as the primary audience for “Our Reality”?
YK: I had several primary audiences, actually. In a dream world, I would love to see middle school students reading and discussing “Our Reality” in their social studies classes. I would love for the next generation to start discussing issues at the intersection of society and technology before they become technologists. If students discuss these issues in middle school, then maybe it will become second nature for them to always consider the relationship between society and technology, and how technologies can create or perpetuate injustices and inequities.
I would also love for high school and first- and second-year college students to read this story. And, of course, I would love for more senior computer scientists — advanced undergraduate students and people in industry — to read this story, too. I also hope that people read the books that I reference in the Suggested Readings section of my novella and the companion document. Those references are excellent. My novella scratches the surface of important issues, and provides a starting point for deeper considerations; the books that I reference provide much greater detail and depth.
As an educator, I wanted the story to be as accessible as possible, to the broadest audience possible. That’s why I put a free PDF of the novella on my website. I also put a PDF of the companion document on my web page. I wrote the companion document in such a way that I hope it will be useful and educational to people even if they never read the “Our Reality” novella.
What are the main lessons you hope readers will take away from “Our Reality”?
YK: I hope that readers will understand the importance of considering the relationship between society and technology. I hope that readers will understand that it is not inevitable that technologies be created. I hope that readers realize that when we do create a technology, we should do so in a responsible way that fully acknowledges and considers the full range of stakeholders and the present and future relationships between that technology and society.
Also, I tried to be somewhat overt about the flaws in the technologies featured in “Our Reality.” As I said earlier, I intentionally included flaws in the technologies in “Our Reality,” for educational purposes. But when one interacts with a brand new technology in the real world, sometimes there are flaws, but those flaws are not as obvious. I would like to encourage both users and designers of technology to be critical in their consideration of new technologies, so that they can proactively spot those flaws from an equity and justice perspective.
If my story reaches people who have not been thinking about the relationship between society, racism, and technology already, I hope “Our Reality” starts them down the path of learning more. I encourage these readers to look at the “Our Reality” companion document, and explore some of the other resources that I reference. I would like to also thank these readers for caring about such an important topic.
Readers may purchase the paperback or Kindle version of “Our Reality” on Amazon.com, and access a free downloadable PDF of the novella, the companion document, and a full list of resources on the “Our Reality” webpage.
Paul Beame, an associate director of the Allen School and a professor in the Theory of Computation group, has been honored with the 2021 ACM SIGACT Distinguished Service Award for his more than 20 years of dedicated and effective support to the theoretical computer science community. Each year, the Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory presents the award to an individual or group who has gone above and beyond in service to their colleagues even as they work to advance the foundations of computing.
Since joining the University of Washington faculty in 1987, Beame has devoted much of his research career to exploring pure and applied complexity to address a range of computational problems in multiple domains. Beyond his research contributions, Beame has dedicated significant time and effort to promoting knowledge-sharing and collaboration within the theoretical computer science community through various leadership roles in ACM, SIGACT, and its sibling organization within the IEEE Computer Society, the Technical Committee on Mathematical Foundations (TCMF). He is also credited with advancing two of the field’s flagship conferences, the ACM Symposium on the Theory of Computing (STOC) and the IEEE Symposium on Foundations of Computer Science (FOCS), through a combination of formal and informal leadership and service.
FOCS 1999 marked Beame’s first foray into conference leadership, when he took on the role of program committee chair and served informally as a local co-organizer. He subsequently went on to serve as general chair, finance chair, and/or local co-organizer for no fewer than half a dozen more FOCS and STOC conferences between 2006 and 2011. Beyond fulfilling his official leadership duties, over the years Beame has provided organizers of numerous successive conferences with unofficial assistance ranging from identifying potential venues, to recruiting local volunteers, to addressing issues with the online registration system.
“Paul’s tireless service, in both official and unofficial roles, has been highly instrumental in ensuring the smooth running and continued vitality of the flagship STOC and FOCS conferences and of SIGACT itself,” said ACM SIGACT Chair Samir Khuller, the Peter and Adrienne Barris Professor and Chair of the Computer Science Department at Northwestern University. “In addition to his many official roles, Paul’s influence is also felt, equally significantly, through his unofficial roles as ‘SIGACT oracle’ and advisor to those responsible for running our main conferences every year.”
Following the completion of his term as chair of the IEEE’s TCMF in 2012, Beame began his tenure as chair of ACM SIGACT. Over the next six years, as he served out his terms as chair and then immediate past chair of the group, Beame is credited with promoting greater cooperation between the two organizations, which represent the theoretical computer science community in parallel. He was also instrumental in revitalizing STOC through the introduction of TheoryFest, for which he co-planned and co-organized the first two events in 2017 and 2018.
As a leader, Beame is known for building in mechanisms for ensuring good management practices will carry forward while building up the infrastructure for running conferences and committees that enables his successors to seamlessly take the reins. And he hasn’t been shy about challenging the conventional wisdom along the way.
“There are typically two distinct types of exceptional leaders for professional organizations,” observed David Shmoys, Laibe/Acheson Professor at Cornell University. “First, there are the people who are detail-focused and excel at making sure that the myriad of lower-level processes work with the precision of a well-engineered system — in a pre-digital world, the standard metaphor would be a Swiss watch. Second are the ‘out-of-the-box’ thinkers who aren’t content with making things run as well as possible within the framework of the status quo, but instead push the organization to improve upon itself by inventing new ways that the systems can run. Paul excels in both dimensions, and this is what makes his service extraordinarily remarkable.”
Even after officially passing the baton to the next slate of leaders and volunteers, Beame continues to be generous with his time and advice to help sustain the community’s momentum.
“Whereas many people who have served in leadership roles view stepping down as a chance to leave obligations behind,” Khuller noted, “Paul has repeatedly viewed it as an opportunity to smooth the path for future incumbents.”
In addition to laying the groundwork for future leaders, Beame was also a vocal champion of open access to SIGACT and other ACM conference proceedings during his time representing SIGACT and the SIG Board. Despite entrenched opposition from some quarters, his position ultimately prevailed. Beginning in 2015, all SIGACT and many other SIG conference proceedings were made open-access in perpetuity via their respective conference websites — including the STOC website, which Beame himself initially built and maintained along with the FOCS website to provide an online history of past conferences and their proceedings.
“This award is a well-earned honor for Paul, who has been tireless in helping SIGACT move forward,” said Allen School professor emeritus Richard Ladner.
Ladner’s faculty colleague Anna Karlin agreed. “For two decades, Paul has repeatedly stepped up to provide every manner of service to the SIGACT community,” she said. “He is an exemplary recipient of this award.”
Yasaman Sefidgar, a Ph.D. student working with Allen School professor James Fogarty, has been named a 2021 Facebook Fellow for her research developing computational and data-driven systems that inform and support social and health interventions. Sefidgar’s work is currently focused on studying the powerful effects of social support and how to make it more accessible through online platforms, especially during difficult times.
Sefidgar aims to use her Fellowship to understand the emotional needs of people online — specifically among minority populations who experience incidents such as microaggressions. Her findings will help her design systems that can aid people coping with distress. In her previous work with students dealing with microaggressions, she found that those who successfully worked through them bounced back quickly after seeking support from others who had similar experiences and could offer guidance; the earlier they found this support, the better their recovery. Sefidgar wants to connect people based on their experiences and the impact a connection makes, then guide the dynamics of the social interaction to keep the connection a positive one.
After studying insights from interviews and a large dataset, and building a framework to analyze the collected stories and encounters, Sefidgar will develop algorithms for a platform that can support more specific needs to promote well-being.
“We need additional knowledge of mechanisms of effective social support in situations of interest. My mixed-method research addresses both fronts,” said Sefidgar. “Qualitative accounts of how individuals currently seek support would highlight the challenges and barriers that technology solutions might address. They can also inform quantitative analysis that not only confirms the qualitative insights but also brings to light the mechanistic elements of effective support.”
Sefidgar said that Facebook has the resources to improve social support for well-being by translating the day-to-day experiences shared on its platforms into new recommendation policies and content curation. The company can also implement interventions that improve conversations.
“I hope findings from my work can inform the design of interfaces, interactions, and algorithms on Facebook and Instagram with the potential to benefit millions of users,” she said.
Before coming to the Allen School, Sefidgar obtained her B.S. in computer engineering from the Sharif University of Technology and M.S. degrees in human-computer interaction from the University of British Columbia and in computer vision from Simon Fraser University. She has co-authored 10 publications, including a Best Paper Honorable Mention for “Situated Tangible Robot Programming” from the 2017 International Conference on Human-Robot Interaction.
Sefidgar is one of only 21 doctoral students worldwide chosen to receive Facebook Fellowships this year based on their innovative research. Past Allen School recipients of the Facebook Fellowship include Minjoon Seo (2019), James Bornholt and Eunsol Choi (2018), and Aditya Vashistha (2016).