
Allen School celebrates diversity and inclusion

Grace Hopper attendees
Allen School representatives at the Grace Hopper Celebration of Women in Computing.

As a community committed to diversity and inclusion, the Allen School celebrates and values differences in its members. Yesterday (Oct. 10), the School held its annual diversity in computing reception, a favorite event highlighting the School’s participation in organizations and conferences that celebrate diversity in computing.

Students, faculty and staff who attended the Grace Hopper Celebration of Women in Computing earlier in October and the ACM Richard Tapia Celebration of Diversity in Computing in late September were recognized.

The Grace Hopper Celebration, held this year in Orlando, Florida, is the world’s largest gathering of women technologists and focuses on helping women grow, learn and develop to their highest potential.

Tapia conference attendees
Allen School attendees at the Richard Tapia Celebration of Diversity in Computing.

“I loved attending Grace Hopper this year. So many of the talks were so inspiring and gave me hands-on tools to approach challenges I face as a woman in tech. It was a big confidence booster, and I had the chance to meet so many amazing women in my field,” said Amanda Baughan, a graduate student in the Allen School. “It’s inspired me to tackle more difficult problems and reach for goals I may have second-guessed my own abilities in achieving previously.”

Aishwarya Mandyam
Jodi Tims, Chair of ACM-W; Allen School student Aishwarya Mandyam; Vidya Srinivasan and Sheila Tejada, co-chairs of the Grace Hopper Celebration.

Allen School student Aishwarya Mandyam was honored at the Hopper Celebration for her work, winning second place internationally in the ACM Student Research Competition.

The Tapia Celebration, held in San Diego this year, brings together people of all backgrounds, abilities and genders to recognize, celebrate and promote diversity in computing.

“I got to learn about thriving opportunities for a diverse workforce in tech and discovered it as a great platform for me to completely embrace distinct identities of myself–a woman of color, a first-gen college student, an immigrant with all transitioning struggles bolstered–and was able to find my own ground in this highly challenging field,” said Radia Karim, a junior in the Allen School. “I was really moved by the conference’s agenda and with the extremely bold and diverse Tapia attendees who have been consistently defining their own footprints in tech rising above all odds and making the future more welcoming.”

Recognizing those in attendance, Ed Lazowska, the Bill & Melinda Gates Chair in the Allen School, said that the two conferences highlight the School’s core values and its commitment to diversity and inclusion.

“The Allen School has been widely recognized as a leader in promoting gender diversity in computing. In addition to our strides in our student body, I want to note that over the past 9 years, our faculty has grown by 29, and 15 of these are women – an amazing record for which Hank Levy deserves a great deal of credit,” he said. “In the past few years we’ve dramatically increased the attention we devote to underrepresented minority students and students from low-income backgrounds, at both the undergraduate and graduate levels.”

The Allen School has partnered with the College of Engineering’s STARS program and the state’s AccessCSforAll; students from both were also recognized.

Kimberly Ruth
Lisa Simonyi Prize recipient Kim Ruth and incoming Allen School Director Magda Balazinska.

During the reception, Kimberly Ruth, an Allen School senior, was awarded the Lisa Simonyi Prize. The prize was established by Lisa and Charles Simonyi for students who exemplify the commitment to excellence, leadership, and diversity to which the School aspires. Ruth is an exceptionally talented and dedicated student. She is a member of UW’s Interdisciplinary Honors program and is a dual major in computer engineering and mathematics. Not only is she engaged in research with Allen School professors Franzi Roesner and Yoshi Kohno in the Security & Privacy Lab, but she has also been awarded the 2018 Goldwater Scholarship and the 2017 Washington Research Foundation Fellowship. She has served as a tutor for four years in a program that teaches math and Python programming to middle and high school students and founded Go Figure, an initiative to get middle school students excited about math. Last year, she was named to the Husky 100, an annual program that honors UW students who are making a positive impact on the University community.

As professor and incoming Allen School director Magda Balazinska noted when she presented the award to Ruth, “In a program full of remarkable students, Kim stands out.”

Thanks to the Simonyis for supporting diversity and excellence, and thanks to everyone who came out to celebrate the people who are making our school and our field a more welcoming destination for all. And congratulations to Kim!

For more about our efforts to advance diversity in computing, check out the Allen School’s inclusiveness statement here.

October 11, 2019

Allen School researchers find racial bias built into hate-speech detection


Top, left to right: Sap, Gabriel, Smith; bottom, left to right: Card, Choi

The volume of content posted on Facebook, YouTube, Twitter and other social media platforms every moment of the day, from all over the world, is monumental. Unfortunately, some of it is biased, hate-filled language targeting members of minority groups and often prompting violent action against them. Because it is impossible for human moderators to keep up with the volume of content generated in real time, platforms are turning to artificial intelligence and machine learning to catch toxic language and stop it quickly. Regrettably, these toxic-language detection tools have been found to suppress already marginalized voices.

“Despite the benevolent intentions of most of these efforts, there’s actually a really big racial bias problem in hate speech detection right now,” said Maarten Sap, a Ph.D. student in the Allen School. “I’m not talking about the kind of bias you find in racist tweets or other forms of hate speech against minorities, instead the kind of bias I’m talking about is the kind that leads harmless tweets to be flagged as toxic when written by a minority population.”

In their paper, “The Risk of Racial Bias in Hate Speech Detection,” presented at the recent Association for Computational Linguistics (ACL) meeting, Sap, fellow Ph.D. student Saadia Gabriel, professors Yejin Choi and Noah Smith of the Allen School and the Allen Institute for Artificial Intelligence, and Dallas Card of Carnegie Mellon University studied two datasets totaling 124,779 tweets that were flagged for toxic language by a machine learning tool used by Twitter. What they found was widespread evidence of racial bias in how the tool characterized content. In one dataset, the tool mistakenly reported 46% of non-offensive tweets written in African American English (AAE)–commonly spoken by black people in the US–as offensive, versus 9% of those written in general American English. In the other dataset, 26% of tweets in AAE were reported as offensive when they were not, versus 5% of those in general American English.

“I wasn’t aware of the exact level of bias in Perspective API–the tool used to detect online hate speech–when searching for toxic language, but I expected to see some level of bias from previous work that examined how easily algorithms like AI chatter bots learn negative cultural stereotypes and associations,” said Gabriel. “Still, it’s always surprising and a little alarming to see how well these algorithms pick up on toxic patterns pertaining to race and gender when presented with large corpora of unfiltered data from the web.”

This matters, Sap said, because ignoring the social context of the language harms minority populations by suppressing inoffensive speech. To address the biases displayed by the tool, the group changed the annotation guidelines, or the rules for reporting hate speech. As an experiment, the researchers took 350 AAE tweets and enlisted Amazon Mechanical Turkers for their help.

Gabriel explained that on Amazon Mechanical Turk, researchers can set up tasks for workers to help with something like a research project or marketing effort. There are usually instructions and a set of criteria for the workers to consider, then a number of questions. 

“Here, you can tell workers specifically if there are particular things you want them to consider when thinking about the questions, for instance the tweet source,” she said. “Once the task goes up, anyone who is registered as a worker on Amazon Mechanical Turk can answer these questions. However, you can add qualifications to restrict the workers. We specified that all workers had to originate from the US since we’re considering US cultural norms and stereotypes.”
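For readers unfamiliar with the platform’s mechanics, the sketch below shows how an annotation task restricted to US-based workers could be posted programmatically with the AWS boto3 client. The title, reward, question form and other parameters are illustrative placeholders, not the researchers’ actual setup.

```python
# Illustrative only: HIT parameters and the question URL below are placeholders,
# not the settings used in the study.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Restrict the task to workers located in the US, as described above.
us_only = [{
    "QualificationTypeId": "00000000000000000071",  # built-in Worker_Locale qualification
    "Comparator": "EqualTo",
    "LocaleValues": [{"Country": "US"}],
}]

# A minimal ExternalQuestion pointing at a (hypothetical) annotation form.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/annotate?tweet_id=12345</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Is this tweet offensive?",
    Description="Read a tweet and answer a few questions about it.",
    Keywords="annotation, language, tweets",
    Reward="0.10",
    MaxAssignments=5,                    # number of workers who annotate each tweet
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=7 * 24 * 3600,
    Question=question_xml,
    QualificationRequirements=us_only,
)
print("HIT created:", hit["HIT"]["HITId"])
```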

When given the tweets without background information, the Turkers reported that 55 percent of the tweets were offensive. When given the dialect and race of the tweeters, the Turkers reported that 44 percent of the tweets were offensive. The Turkers were also asked if they found the tweets personally offensive; only 33 percent of the posts were reported as such. This showed the researchers that priming the annotators with the source’s race and dialect influenced the labels, and it also revealed that the annotations are not objective.

“Our work serves as a reminder that hate speech and toxic language is highly subjective and contextual,” said Sap. “We have to think about dialect, slang and in-group versus out-group, and we have to consider that slurs spoken by the out-group might actually be reclaimed language when spoken by the in-group.”

While the study’s findings are concerning, Gabriel believes language processing systems can be taught to take the source into account, helping to prevent the racial biases that cause content to be mischaracterized as hate speech and that could lead to already marginalized voices being deplatformed.

“It’s not that these language processing machines are inventing biases, they’re learning them from the particular beliefs and norms we spread online. I think that in the same way that being more informed and having a more empathic view about differences between peoples can help us better understand our own biases and prevent them from having negative effects on those around us, injecting these kind of deeper insights into machine learning algorithms can have a significant difference on preventing racial bias,” she said. “For this, it is important to include more nuanced perspectives and greater context when doing natural language processing tasks like toxic language detection. We need to account for in-group norms and the deep complexities of our culture and history.”

To learn more, read the research paper here and watch a video of Sap’s ACL presentation here. Also see previous coverage of the project by Vox, Forbes, TechCrunch, New Scientist and Fortune.

October 9, 2019

Allen School’s 2019-2020 Distinguished Lecture Series will explore leading-edge innovation and real-world impact

Top left to right: Dean, Patterson, Spelke; bottom left to right: Howard, McKeon, Pereira

Mark your calendars! Another exciting season of the Allen School’s Distinguished Lecture Series kicks off on Oct. 10. During the 2019-2020 season, we will explore deep learning, domain-specific architectures, recent advances in artificial intelligence and robotics, and so much more. All lectures take place at 3:30 p.m. in the Amazon Auditorium on the ground floor of the Bill & Melinda Gates Center on the University of Washington’s Seattle campus. In addition, each lecture will be live streamed on the Allen School’s YouTube channel. 

Oct. 10: Jeff Dean, Google Senior Fellow and Senior Vice President for Google AI

Allen School alumnus Jeff Dean (Ph.D., ‘96) returns to his alma mater on Thursday, Oct. 10 to deliver a talk on “Deep Learning to Solve Challenging Problems.” Dean’s presentation will highlight recent accomplishments by Google research teams, such as the open-source TensorFlow system to rapidly train, evaluate and deploy machine learning systems, and how they relate to the National Academy of Engineering’s Grand Challenges for Engineering in the 21st Century. He will also explore how machine learning is transforming many aspects of today’s computing hardware and software systems. 

Dean, who joined Google in 1999, currently leads teams working on systems for speech recognition, computer vision, language understanding and various other machine learning tasks. During his two decades with the company, he co-designed and implemented many of Google’s most important and visible features, including multiple generations of its crawling, indexing and query serving systems as well as pieces of Google’s initial advertising and AdSense for content systems. He also helped create Google’s distributed computing infrastructure, including MapReduce, BigTable and Spanner. 

Oct. 29: David Patterson, Professor Emeritus, University of California, Berkeley; Distinguished Engineer, Google; and Vice Chair, RISC-V Foundation

David Patterson will deliver a talk on Oct. 29 examining “Domain Specific Architectures (DSA) for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs).” His presentation will explore how the recent success of deep neural networks has inspired a resurgence in domain specific architectures to run them, partially as a result of the deceleration of microprocessor performance improvement with the ending of Moore’s Law. His talk will review Google’s first-generation Tensor Processing Unit (TPUv1) and how the company built the first production DSA supercomputer for the much harder problem of training, which was deployed in 2017.

Patterson’s work on the RISC, Redundant Array of Inexpensive Disks (RAID), and Network of Workstations (NOW) projects helped lead to multibillion-dollar industries. In 2017, he and RISC collaborator John Hennessy shared the Association for Computing Machinery’s A.M. Turing Award — the “Nobel Prize of computing” — for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry.

Nov. 14: Elizabeth Spelke, Marshall L. Berkman Professor of Psychology at Harvard University and Investigator, NSF-MIT Center for Brains, Minds and Machines

Elizabeth Spelke will deliver a lecture on Nov. 14 titled, “From Core Concepts to New Systems Knowledge.” Her lecture will center on cognitive systems in young children and the ability of the human species to gain knowledge not only through gradual learning but also through a fast and flexible learning process that appears to be unique to humans and emerges with the onset of language. Although this phase of life isn’t fully understood, Spelke is using research in psychology, neuroscience and artificial intelligence to better understand human cognitive function.

Spelke explores the sources of uniquely human cognitive capacities, including the capacity for formal mathematics, for constructing and using symbolic representations, and for developing comprehensive taxonomies of objects. Conducting behavioral research on infants and preschool children, Spelke studies the origins and development of human cognition by examining how humans develop their understanding of objects, actions, people, places, numbers and geometry. She also works with computational cognitive scientists to test computational models of infants’ cognitive capacities, and to extend her research into the field, with the ultimate goal of enhancing young children’s learning.

Dec. 5: Ayanna Howard, Linda J. and Mark C. Smith Professor and Chair, School of Interactive Computing at the Georgia Institute of Technology

Ayanna Howard will deliver a presentation on Dec. 5 titled “Roving for a Better World.” Her talk will focus on the role of computer scientists as responsible global citizens. She will delve into the implications of recent advances in robotics and artificial intelligence, and explain the critical importance of ensuring diversity and inclusion at all stages to reduce the risk of unconscious bias and to ensure that robots are designed to be accessible to all.

Prior to joining Georgia Tech, Howard was a senior robotics researcher and deputy manager in the Office of the Chief Scientist at NASA’s Jet Propulsion Laboratory. She was first hired by NASA at the age of 27 to lead a team designing a robot for future Mars exploration missions that could “think like a human and adapt to change.” While at Georgia Tech, she has served as associate director of research for the Institute for Robotics and Intelligent Machines and chair of the robotics Ph.D. program. Business Insider named her one of the most powerful women engineers in the world in 2015, and in 2018 she was named to Forbes’ list of the Top 50 Women in Tech.

Jan. 16: Kathleen McKeown, Henry and Gertrude Rothschild Professor of Computer Science and Founding Director, Data Science Institute, Columbia University

Kathleen McKeown will deliver a talk on Jan. 16; more details about her presentation will be posted soon on our Distinguished Lecture Series page.

McKeown’s research is in natural language processing, summarization, natural language generation and analysis of social media. Her work in these areas focuses on text summarization, including generating updates on disasters from live, streaming information; generating messages about electricity usage and using reinforcement learning over usage logs to determine what kinds of messages can change behavior; and analyzing social media to detect messages about aggression and loss. While at Columbia, McKeown has served as the director of the Data Science Institute, was department chair from 1998 to 2003 and was the vice dean for research for the School of Engineering and Applied Science for two years. McKeown is also active internationally, having served as president, vice president and secretary-treasurer of the Association for Computational Linguistics as well as a board member and secretary of the board for the Computing Research Association.

Feb. 27: Fernando Pereira, Vice President and Engineering Fellow, Google

Fernando Pereira will deliver the Allen School’s 2020 Taskar Memorial Lecture on Feb. 27. More details about his presentation will be posted soon on our Distinguished Lecture Series page.

Pereira leads research and development at Google in natural language understanding and machine learning. Previously, he was chair of the computer and information science department at the University of Pennsylvania, head of the machine learning and information retrieval department at AT&T Labs, and held research and management positions at SRI International. Pereira has produced more than 120 research publications on computational linguistics, machine learning, bioinformatics, speech recognition and logic programming. He holds several patents and is widely recognized for his contributions to sequence modeling, finite-state methods, and dependency and deductive parsing.

Be sure to check our Distinguished Lecture Series page for updates throughout the season, and please plan to join us!

October 2, 2019

Peak performance!

Dan and Galen Weld at the summit of Buck Mountain, with Glacier Peak in the background

On Saturday, Allen School Ph.D. student Galen Weld, his twin brother Adam, his father (and Allen School professor) Dan, and his mom Margaret Rosenfeld reached the 8,528-foot summit of Buck Mountain in the Glacier Peak Wilderness. With that achievement, Galen became the youngest person to summit each of the 100 highest peaks in Washington, and Dan and Galen became the first father-son team to achieve this milestone. (Dan completed his summit of Washington’s “Top 100” in 2016.)

Galen pops the cork on Champagne that Dan brought along to celebrate the achievement

September 23, 2019

Hao Peng wins 2019 Google Ph.D. Fellowship

Hao Peng

Hao Peng, a Ph.D. student working with Allen School professor Noah Smith, has been named a 2019 Google Ph.D. Fellow for his research in natural language processing (NLP). His research focuses on a wide variety of problems in NLP, including representation learning and structured prediction.

Peng, who is one of 54 students throughout the world to be selected for a Fellowship, aims to analyze and understand the workings and decisions of deep learning models and to incorporate inductive bias into the models’ design, facilitating better learning algorithms. His research provides a better understanding of many state-of-the-art models in NLP and offers more principled ways to insert inductive bias into representation learning algorithms, making them both more computation-efficient and less data-hungry.

“Hao has been contributing groundbreaking work in natural language processing that combines the strengths of so-called ‘deep’ representation learning with structured prediction,” said Smith. “He is creative, sets a very high bar for his work, and is a delightful collaborator.”

One of Peng’s groundbreaking contributions was his research in semi-parametric models for natural language generation and text summarization. He focused on the project last summer during an internship with the Language team at Google AI. Based on his work, Peng was asked to continue his internship part-time until the team published its findings at the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019).

Another important piece of research that Peng published was “Backpropagating through Structured Argmax using a SPIGOT,” which earned a Best Paper Honorable Mention at the annual meeting of the Association for Computational Linguistics in 2018. The paper proposes the structured projection of intermediate gradients optimization technique (SPIGOT), which facilitates end-to-end training of neural models with structured prediction as intermediate layers. Experiments show that SPIGOT outperforms pipelined and non-structured baselines, providing evidence that structured bias may help to learn better NLP models. Due to its flexibility, SPIGOT is applicable to multitask learning with partial or full supervision of the intermediate tasks, or to inducing latent intermediate structures, according to Peng. His collaborators on the project include Smith and Sam Thomson, formerly a Ph.D. student at Carnegie Mellon University and now at Semantic Machines.

Peng has co-authored a total of eight papers in the last three years while pursuing his Ph.D. at the Allen School. The Google Fellowship will help him extend his results, providing fresh theoretical justifications and a better understanding of many state-of-the-art NLP models, which could yield more principled techniques to bake inductive bias into representation learning algorithms.

Peng has worked as a research intern at Google New York and Google Seattle during his studies at the UW. Prior to that he worked as an intern at Microsoft Research Asia in Beijing, and at the University of Edinburgh.

Since 2009, the Google Ph.D. Fellowship program has recognized and supported exceptional graduate students working in core and emerging areas of computer science. Previous Allen School recipients include Joseph Redmon (2018), Tianqi Chen and Arvind Satyanarayan (2016), Aaron Parks and Kyle Rector (2015), and Robert Gens and Vincent Liu (2014). Learn more about the 2019 Google Fellowships here.

Congratulations, Hao!

September 17, 2019

“Geek of the Week” Justin Chan is using smartphones to democratize medical diagnostics

Allen School Ph.D. student Justin Chan is on a mission to put the power of medical diagnostics into people’s hands, inspired by the ubiquity of smartphones coupled with advancements in artificial intelligence. Working with collaborators in the Networks & Mobile Systems Lab led by professor Shyam Gollakota and in UW Medicine, Chan has developed a mobile system for detecting ear infections in children and a contactless AI system for detecting cardiac arrest using smart devices. He co-founded Edus Health, a University of Washington spin-out that is pursuing clearance from the U.S. Food & Drug Administration to move his research out of the lab and into people’s hands and homes. His efforts have earned coverage by Scientific American, NPR, MIT Tech Review, and more — and, most recently, a feature as GeekWire’s Geek of the Week.

The motivation for Chan’s research stems from his recognition that while smart devices can combine multiple, sophisticated sensors in a battery-powered device no bigger than a pocket, the medical industry often still relies on expensive — and large — specialized devices for diagnosing patients. “I believe that everyone should be able to own their medical data. To that end, my goal is to make medical diagnostics frugal and accessible enough that anyone with a few spare parts and DIY-know-how would be able to obtain clinical-grade accuracies in the comfort of their homes,” Chan told GeekWire. “While the reality is that many diagnostic tools in healthcare often require expensive tools and specialist expertise, I am hoping we will be able to change that.”

Chan further explored the potential for smartphones and AI to transform health care in a recent article he co-authored with Drs. Sharat Raju of UW Medicine and Eric Topol of Scripps Research that appeared in The Lancet. In that article, the authors highlighted multiple examples of research aimed at using these technologies to diagnose a range of pediatric conditions in a variety of settings.

Read the full GeekWire profile here, and check out The Lancet article here.

Way to go, Justin!

September 17, 2019

Allen School’s Samia Ibtasam receives Google Women Techmakers Scholarship

Woman in Google sweatshirt with foggy Golden Gate Bridge in background

Samia Ibtasam, a third-year Ph.D. student in the Allen School’s Information & Communication Technology for Development (ICTD) Lab, was recognized recently by Google with a Women Techmakers Scholarship. The Women Techmakers Scholars Program — formerly known as the Anita Borg Memorial Scholarship Program — aims to advance gender equality in computing by encouraging women to become active role models and leaders in the field. Ibtasam, who works with Allen School professor Richard Anderson to increase financial and technological inclusion in low-resource communities, is one of only 20 women at universities across North America to receive a 2019 scholarship.

Ibtasam’s focus on research addressing issues in developing communities and her commitment to increasing diversity in computing were both shaped by her experiences in her native Pakistan. Before her arrival at the University of Washington in 2016, Ibtasam was the founding co-director of the Innovations for Poverty Alleviation Lab (IPAL) at the Information Technology University in Lahore. During her time at IPAL, Ibtasam focused on developing diagnostic applications and information systems to support maternal, neonatal, and child health in a country which has one of the highest infant mortality rates in the world. 

Working with the government of the Pakistani province of Punjab and the United Kingdom’s Department for International Development (DFID), Ibtasam launched Har Zindagi — Urdu for “every life,” inspired by the slogan “every life matters.” The goal of the program was to revamp the child immunization system and introduce digital health records for its citizens. One of Ibtasam’s priorities was to make the immunization card machine-readable to enable public health administrators to easily and effectively monitor child vaccinations, and to educate parents, including many with low literacy, about the recommended immunization schedule. Ibtasam also led the development of an Android-based mobile app to assist vaccinators in entering information in the field, as well as a web dashboard to track immunization coverage and retention data for policymakers. Along the way, she had to contend with a number of challenges, including lack of wireless coverage in rural areas, field workers unfamiliar with smartphone technology, family displacement, and the need to coordinate among diverse stakeholders, from the World Health Organization and UNICEF, to local government officials and app developers.

For the four years she was at ITU, Ibtasam was the only woman on the computer science faculty. She is also the first Pakistani woman to formally specialize in the field of ICTD. These early-career experiences have reinforced her belief in the need to cultivate more women as leaders and role models through programs like the Women Techmakers Scholarship.

“One thing that I did and am still continuing to work on is inspiring other young women to be technical and confident. As the only female faculty member in ITU’s CS department, I saw the need and value of being a role model and source of support for young women,” Ibtasam said. “Throughout my career, I have been in many work environments — research labs, conferences, meetings, and panels — where I was the only woman. It took courage and effort to be part of them, but it also made me resilient to stay, speak up, and represent. And while at times, there were male mentors who provided advice and support, many times doors were closed to me. So, I want to hold doors open for other women through my inspiration, my mentorship, my research, and my designed technologies.”

Ibtasam has carried this theme into her work at the Allen School, where she focuses on projects that explore the availability and adoption of digital financial services by people in developing and emerging markets — an issue that impacts more than two billion people worldwide who remain unbanked. The issue also has profound implications for women’s empowerment in societies where cultural, religious, and even legal frameworks may hinder their financial independence. 

For example, Ibtasam was the lead author of a paper exploring women’s use of mobile money in Pakistan that appeared at the 1st ACM SIGCAS Conference on Computing and Sustainable Societies (COMPASS ‘18) organized by the Association for Computing Machinery’s Special Interest Group on Computers & Society. Working alongside colleagues in the ICTD Lab, the Information Technology University in Lahore, and the Georgia Institute of Technology, Ibtasam examined how gendered barriers in resource-constrained communities affect opportunities for women to improve their circumstances through the use of digital financial services. With information gleaned from more than 50 semi-structured interviews with Pakistani women and men along with data from the Financial Inclusion Insights Survey, the team identified the socioeconomic, religious, and gendered dynamics that influence the ability of women in the country to access financial services. Based on these findings, the team proposed a set of recommendations for expanding the benefits of digital financial services to more women.

The previous year, Ibtasam and co-authors presented a paper at the ACM’s 9th International Conference on Information and Communication Technologies and Development (ICTD ‘17) that evaluated how learnability and other factors beyond access can help or hinder widespread adoption of smartphone-based mobile money applications in Pakistan. In addition, Ibtasam has explored the role of mobile money in improving financial inclusion in southern Ghana, which also appeared at COMPASS ‘18, and teamed up with members of the Allen School’s Security and Privacy Research Lab to investigate the computer security and privacy needs and practices of refugees who had recently resettled in the United States. The latter appeared at the 2018 IEEE Symposium on Security and Privacy (S&P ’18) co-sponsored by the IEEE Computer Society’s Technical Committee on Security and Privacy and the International Association for Cryptologic Research.

Ibtasam and her fellow Google scholars were invited to a retreat at the company’s headquarters in Mountain View, California in June, where they had the opportunity to engage in a variety of professional development and networking activities. Previous Allen School recipients of the Google scholarship include Ph.D. alumni Kira Goldner (2016), Eleanor O’Rourke and Irene Zhang (2015), Jennifer Abrahamson and Nicola Dell (2012), Janara Christensen and Kateryna Kuksenok (2011), and Lydia Chilton and Kristi Morton (2010).

Congratulations, Samia!

September 13, 2019

Allen School hosts Mathematics of Machine Learning Summer School

Man in glasses pointing at white board.
Kevin Jamieson at the Mathematics of Machine Learning Summer School.

In late July and early August, the Allen School was pleased to host a very successful MSRI Mathematics of Machine Learning Summer School. Around 40 Ph.D. students from across the country — and beyond — were brought to the University of Washington for two weeks of lectures by experts in the field, as well as problem sessions and social events. The lectures were also opened up to the broader UW community and recorded for future use.

Learning theory is a rich field at the intersection of statistics, probability, computer science, and optimization. Over the last few decades the statistical learning approach has been successfully applied to many problems of great interest, such as bioinformatics, computer vision, speech processing, robotics, and information retrieval. These impressive successes relied crucially on the mathematical foundation of statistical learning.

Recently, deep neural networks have demonstrated stunning empirical results across many applications like vision, natural language processing, and reinforcement learning. The field is now booming with new mathematical problems, and in particular, the challenge of providing theoretical foundations for deep learning techniques is still largely open. On the other hand, learning theory already has a rich history, with many beautiful connections to various areas of mathematics such as probability theory, high dimensional geometry, and game theory. The purpose of the summer school was to introduce graduate students and advanced undergraduates to these foundational results, as well as to expose them to the new and exciting modern challenges that arise in deep learning and reinforcement learning.

Woman pointing at white board.
Emma Brunskill

Participants explored a variety of topics with the guidance of lecturers Joan Bruna, a professor at New York University (deep learning); Stanford University professor Emma Brunskill (reinforcement learning); Sébastien Bubeck, senior researcher at Microsoft Research (convex optimization); Allen School professor Kevin Jamieson (bandits); and Robert Schapire, principal researcher at Microsoft Research (statistical learning theory).

The summer school was made possible by support from the Mathematical Sciences Research Institute, Microsoft Research, and the Allen School, with the cooperation of the Algorithmic Foundations of Data Science Institute at the University of Washington. The program was organized by Bubeck and Adith Swaminathan of Microsoft Research and Allen School professor Anna Karlin.

Learn more by visiting the summer school website here, and check out the video playlist here!

August 23, 2019

Allen School releases MuSHR robotic race car platform to drive advances in AI research and education

Close-up shot of blue MuSHR robotic race car with #24 and Husky graphic painted in white
An example of the MuSHR robotic race car, which people can build from scratch for research or educational purposes — or just for fun. Mark Stone/University of Washington

The race is on to cultivate new capabilities and talent in robotics and artificial intelligence, but the high barrier to entry for many research platforms means many would-be innovators could be left behind. Now, thanks to a team of researchers in the Personal Robotics Laboratory at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, expert and aspiring roboticists alike can build a fully functional, robotic race car with advanced sensing and computational capabilities at a fraction of the cost of existing platforms. MuSHR — short for Multi-agent System for non-Holonomic Racing — provides researchers, educators, students, hobbyists, and hackers a vehicle to advance the state of the art, learn about robotic principles, and experiment with capabilities that are normally reserved for high-end laboratories.

“There are many tough, open challenges in robotics, such as those surrounding autonomous vehicles, for which MuSHR could be the ideal test platform,” said lab director Siddhartha Srinivasa, a professor in the Allen School. “But beyond research, we also need to think about how we prepare the next generation of scientists and engineers for an AI-driven future. MuSHR can help us answer that challenge, as well, by lowering the barrier for exploration and innovation in the classroom as well as the lab.”

MuSHR is an open-source, full-stack system for building a robotic race car using off-the-shelf and 3D-printed parts. The website contains all of the information needed to assemble a MuSHR vehicle from scratch, including a detailed list of materials, ready-to-use files for 3D-printing portions of the race-car frame, step-by-step assembly instructions and video tutorials, and the software to make the car operational. 

More complex, research-oriented robot navigation systems, such as the Georgia Tech AutoRally, cost upwards of $10,000. The more budget-friendly MIT RACECAR, the original do-it-yourself platform that inspired the UW team’s project, costs $2,600 with basic sensing capabilities. By contrast, a low-end MuSHR race car with no sensing can be assembled for as little as $610, and a high-end car equipped with a multitude of sensors can be built for around $930. 

“We were able to achieve a dramatic cost reduction by using low-cost components that, when tested under MuSHR’s use cases, do not have a significant impact on system performance compared to other, more costly systems,” noted Allen School Ph.D. student Patrick Lancaster. But don’t be fooled by the relatively modest price; what’s under the hood will appeal to seasoned researchers, as the high-end setup can support even advanced research projects.

“Most robotic systems available today fall into one of two categories: either simple, educational platforms that are low-cost but also low-functionality, or more robust, research-oriented platforms that are prohibitively expensive and complex to use,” explained Allen School master’s student Johan Michalove, one of the developers of the project. “MuSHR combines the best of both worlds, offering high functionality at an affordable price. It is comprehensive enough for professional research, but also accessible enough for classroom or individual use.”

mushr.io logo and link

MuSHR comes with a set of computational modules that enable intelligent decision making, including a software and sensor package that enables localization in a known environment, and a controller for navigating towards a goal while avoiding collisions with other objects in its path. Users can build upon these preliminary AI capabilities to develop more advanced perception and control algorithms. The name “MuSHR” was inspired by the UW’s mascot, the Huskies, because it calls to mind the mushing of a dogsled at top speed. As it happens, a MuSHR race car would outpace the dogs, achieving speeds in excess of 15 miles per hour — useful for researchers working to solve critical challenges in autonomous vehicle systems, such as high-speed collision avoidance, to make them viable for real-world use.
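MuSHR’s software stack is built on ROS (the Robot Operating System). Purely as an illustration of how a user-level program might build on the provided localization and control modules, the sketch below shows a minimal ROS node that listens to a pose estimate and publishes Ackermann drive commands; the topic names and the trivial control policy are assumptions made for this example, not MuSHR’s documented interface.

```python
#!/usr/bin/env python
# Illustrative sketch only: the topic names below are hypothetical, not MuSHR's documented API.
import rospy
from geometry_msgs.msg import PoseStamped
from ackermann_msgs.msg import AckermannDriveStamped

class SimpleController:
    def __init__(self):
        # Listen to a (hypothetical) localization topic and publish drive commands.
        self.pose_sub = rospy.Subscriber("/car/pose", PoseStamped, self.on_pose)
        self.drive_pub = rospy.Publisher("/car/drive", AckermannDriveStamped, queue_size=1)

    def on_pose(self, pose_msg):
        # Toy policy: drive slowly straight ahead regardless of the pose estimate.
        # A real controller would steer toward a goal while avoiding obstacles.
        cmd = AckermannDriveStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.drive.speed = 0.5           # meters per second
        cmd.drive.steering_angle = 0.0  # radians
        self.drive_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("simple_controller")
    controller = SimpleController()
    rospy.spin()
```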

For students with a similar need for speed, MuSHR offers an opportunity to learn about the challenges of autonomous systems in a hands-on way. Allen School Ph.D. student and MuSHR project lead Matthew Schmittle worked with students to put the platform through its paces in the school’s Mobile Robotics course this past spring, in which 41 undergraduates used MuSHR to build a complete mobile robotic system while grappling with some of the same challenges as more seasoned researchers at the likes of Tesla, Uber, and Google.

“MuSHR introduces students to key concepts such as localization, planning, and control,” Schmittle, who served as a teaching assistant for the course, explained. “The course culminated in a demo day, where students had a chance to show how well they implemented these concepts by using their MuSHR cars to navigate a wayfinding course. We hope classrooms around the country will take advantage of MuSHR to engage more students to the exciting world of robotics and AI.”

So far, the MuSHR platform has featured in two undergraduate and two graduate-level courses at the UW. Nearly 130 students have had the opportunity to explore robotic principles using MuSHR, and the Allen School plans to offer similar classes in the future.

A group of robotic race cars in different colors against a plain white background
A fleet of MuSHR cars used by students in the Allen School’s Mobile Robotics course earlier this year. Mark Stone/University of Washington

“As a roboticist, I find MuSHR’s platform for rapid testing and refinement to be an enticing prospect for future research,” Srinivasa said. “As an educator, I am excited to see how access to MuSHR will spark students’ and teachers’ imaginations. And let’s not forget the hobbyists and hackers out there. We’re looking forward to seeing the various ways people use MuSHR and what new capabilities emerge as a result.” 

Individuals who are interested in building their own MuSHR robotic race car can download the code and schematics, and view tutorials on the MuSHR website. Educators who are interested in incorporating the MuSHR platform into their classes can email the team at mushr@cs.washington.edu to request access to teaching materials, including assignments and solutions. The team will also provide a small number of pre-built MuSHR cars to interested parties who would otherwise not have the financial means to build the cars themselves, with priority given to schools, groups, and labs whose members are underrepresented in robotics. Interested individuals or labs are encouraged to submit an application by filling out this form.

MuSHR is a work in progress, and the version released today is the lab’s third iteration of the platform. The team plans to continue refining MuSHR while employing it in a variety of research projects at the UW. In addition to Srinivasa, Michalove, Schmittle, and Lancaster, members of the MuSHR team include professor Joshua R. Smith, who holds a joint appointment in the Allen School and Department of Electrical & Computer Engineering; fifth-year master’s students Matthew Rockett and Colin Summers; postdocs Sanjiban Choudhury and Christoforos Mavrogiannis; and Allen School Ph.D. alumna and current postdoc Fereshteh Sadeghi.

The MuSHR project was undertaken with support from the Allen School; the Honda Research Institute; and Intel, which contributed the RealSense on-board cameras used in the construction of the vehicles.

Read coverage of MuSHR in GeekWire here.

August 21, 2019

Allen School’s latest faculty additions will strengthen UW’s leadership in robotics, machine learning, human-computer interaction, and more

The Allen School is preparing to welcome five faculty hires in 2019-2020 who will enhance the University of Washington’s leadership at the forefront of computing innovation, including robot learning for real-world systems, machine learning and its implications for society, online discussion systems that empower users and communities, and more. Meet Byron Boots, Kevin Lin, Jamie Morgenstern, Alex Ratner, and Amy Zhang — the outstanding scholars set to expand the school’s research and teaching in exciting new directions while building on our commitment to excellence, mentorship, and service:

Byron Boots, machine learning and robotics

Former Allen School postdoc Byron Boots returns to the University of Washington this fall after spending five years on the faculty of Georgia Institute of Technology, where he leads the Georgia Tech Robot Learning Lab. Boots, who earned his Ph.D. in Machine Learning from Carnegie Mellon University, advances fundamental and applied research at the intersection of artificial intelligence, machine learning, and robotics. He focuses on the development of theory and systems that tightly integrate perception, learning, and control while addressing problems in computer vision, system identification, state estimation, localization and mapping, motion planning, manipulation, and more. Last year, Boots received a CAREER Award from the National Science Foundation for his efforts to combine the strengths of hand-crafted, physics-based models with machine learning algorithms to advance robot learning.

Boots’ goal is to make robot learning more efficient while producing results that are both interpretable and safe for use in real-world systems such as mobile manipulators and high-speed ground vehicles. His work draws upon and extends theory from nonparametric statistics, graphical models, neural networks, non-convex optimization, online learning, reinforcement learning, and optimal control. One of the areas in which Boots has made significant contributions is imitation learning (IL), a promising avenue for accelerating policy learning for sequential prediction and high-dimensional robotics control tasks more efficiently than conventional reinforcement learning (RL). For example, Boots and colleagues at Carnegie Mellon University introduced a novel approach to imitation learning, AggreVaTeD, that allows for the use of expressive differentiable policy representations such as deep networks while leveraging training-time oracles. Using this method, the team showed that it could achieve faster and more accurate solutions with less training data — formally demonstrating for the first time that IL is more effective than RL for sequential prediction with near-optimal oracles. 

Following up on this work, Boots and graduate student Ching-An Cheng produced new theoretical insights into value aggregation, a framework for solving IL problems that combines data aggregation and online learning. As a result of their analysis, the duo was able to debunk the commonly held belief that value aggregation always produces a convergent policy sequence while identifying a critical stability condition for convergence. Their work earned a Best Paper Award at the 21st International Conference on Artificial Intelligence and Statistics (AISTATS 2018).

Driven in part by his work with highly dynamic robots that engage in complex interactions with their environment, Boots has also advanced the state of the art in robotic control. He and his colleagues recently developed a learning-based framework, Dynamic Mirror Descent Model Predictive Control (DMD-MPC), that represents a new design paradigm for model predictive control (MPC) algorithms. Leveraging online learning to enable customized MPC algorithms for control tasks, the team demonstrated a set of new MPC algorithms created with its framework on a range of simulated tasks and a real-world aggressive driving task. Their work won Best Student Paper and was selected as a finalist for Best Systems Paper at the Robotics: Science and Systems (RSS 2019) conference this past June.

Among Boots’ other recent contributions is the Gaussian Process Motion Planner algorithm, a novel approach to motion planning viewed as probabilistic inference that was designated “Paper of the Year” for 2018 by the International Journal of Robotics Research, and a robust, precise force prediction model for learning tactile perception developed in collaboration with NVIDIA’s Seattle Robotics Research Lab, where he teamed up with his former postdoc advisor, Allen School professor Dieter Fox.

Kevin Lin, computer science education

Kevin Lin arrives at the Allen School this fall as a lecturer specializing in introductory computer science education and broadening participation at the K-12 and post-secondary levels. Lin earned his master’s in Computer Science and bachelor’s in Computer Science and Cognitive Science from the University of California, Berkeley. Before joining the faculty at the Allen School, Lin spent four years as a teaching assistant (TA) at Berkeley, where he taught courses in introductory programming, data structures, and computer architecture. As a head TA for all three of these courses, he coordinated roughly 40 to 100 TAs per semester and focused on improving the learning experience both inside and outside the classroom.

Lin’s experience managing courses with over 1,400 enrolled students spurred his interest in using innovative practices and software tools to improve student engagement and outcomes and to attract and retain more students in the field. While teaching introductory programming at Berkeley, Lin helped introduce and expand small-group mentoring sections led by peer students who previously took the course. These sections offered a safe environment in which students could ask questions and make mistakes outside of the larger lecture. He also developed an optimization algorithm for assigning students to these sessions, with an eye toward improving not only their academic performance but also their sense of belonging in the major. As president of Computer Science Mentors, Lin grew the organization from 100 mentors to more than 200 and introduced new opportunities for mentor training and professional development.

In addition to supporting peer-to-peer mentoring, Lin uses software to effectively share feedback with students as a course progresses. For example, he helped build a system that, at predetermined intervals, automatically prompts students to interactively explore the automated checks that will be applied to their solutions. By giving students an opportunity to compare their own expectations for a program’s behavior against the real output, this approach builds students’ confidence in their work while reducing their need to ask for clarifications on an assignment. He also developed a system for sharing individualized, same-day feedback with students about in-class quiz results to encourage them to revisit and learn from their mistakes. Lin has also taken a research-based approach to identifying the most effective interventions that instructors can use to improve student performance.

Outside of the classroom, Lin has led efforts to expand K-12 computer-science education through pre-service teacher training and the incorporation of CS principles in a variety of subjects. He has also been deeply engaged in efforts to improve undergraduate education as a coordinator of Berkeley’s EECS Undergraduate Teaching Task Force.

Jamie Morgenstern, machine learning and theory of computation

Jamie Morgenstern joins the Allen School faculty this fall from the Georgia Institute of Technology, where she is a professor in the School of Computer Science. Her research explores the social impact of machine learning and how social behavior influences decision-making systems. Her goal is to build mechanisms for ensuring fair and equitable application of machine learning in domains people interact with every day. Other areas of interest include algorithmic game theory and its application to economics — specifically, how machine learning can be used to make standard economic models more realistic and relevant in the age of e-commerce. Previously, Morgenstern spent two years as a Warren Fellow of Computer Science and Economics at the University of Pennsylvania after earning her Ph.D. in Computer Science from Carnegie Mellon University.

Machine learning has emerged as an indispensable tool for modern communication, financial services, medicine, law enforcement, and more. The application of ML in these systems introduces incredible opportunities for more efficient use of resources. However, introducing these powerful computational methods in dynamic, human-centric environments may exacerbate inequalities present in society, as they may well have uneven performance for different demographic groups. Morgenstern terms this phenomenon “predictive inequity.” It manifests in a variety of online settings — from determining which discounts and services are offered, to deciding which news articles and ads people see. Predictive inequity may also affect people’s safety in the physical world, as evidenced by what Morgenstern and her colleagues found in an examination of object detection systems that interpret visual scenes to help prevent collisions involving vehicles, pedestrians, or elements of the environment. The team found that object detection performance varies when measured on pedestrians with differing skin types, independent of time of day or several other potentially mitigating factors. In fact, all of the models Morgenstern and her colleagues analyzed exhibited poorer performance in detecting people with darker skin types — defined as Fitzpatrick skin types between 4 and 6 — than pedestrians with lighter skin types. Their results suggest that, if these discrepancies in performance are not addressed, certain demographic groups may face a greater risk of fatality from object recognition failures.

Spurred on by these and other findings, Morgenstern aims to develop techniques for building predictive equity into machine learning models to ensure they treat members of different demographic groups with similarly high fidelity. For example, she and her colleagues developed a technique for increasing fairness in principal component analysis (PCA), a popular method of dimensionality reduction in the sciences that is intended to reduce bias in datasets but that, in practice, can inadvertently introduce new bias. In other work, Morgenstern and colleagues developed algorithms for incorporating fairness constraints in spectral clustering and in data summarization to ensure equitable treatment of different demographic groups in unsupervised learning applications.

Morgenstern also aims to understand how social behavior and self-interest can influence machine learning systems. To that end, she analyzes how systems can be manipulated and designs systems to be robust against such attempts at manipulation, with potential applications in lending, housing, and other domains that rely on predictive models to determine who receives a good. In one ongoing project, Morgenstern and her collaborators have explored techniques for using historical bidding data, where bidders may have attempted to game the system, to optimize revenue while incorporating incentive guarantees. Her work extends to making standard economic models more realistic and robust in the face of imperfect information. For example, Morgenstern is developing a method for designing near-optimal auctions from samples that achieve high welfare and high revenue while balancing supply and demand for each good. In a previous paper selected for a spotlight presentation at the 29th Conference on Neural Information Processing Systems (NeurIPS 2015), Morgenstern and Stanford University professor Tim Roughgarden borrowed techniques from statistical learning theory to present an approach for designing revenue-maximizing auctions that incorporates imperfect information about the distribution of bidders’ valuations while at the same time providing robust revenue guarantees.

Alex Ratner, machine learning, data management, and data science

Alexander Ratner will arrive at the University of Washington in fall 2020 from Stanford University, where he will have completed his Ph.D. working with Allen School alumnus Christopher Ré (Ph.D., ‘09). Ratner’s research focuses on building new high-level systems for machine learning based around “weak supervision,” enabling practitioners in a variety of domains to use less precise, higher-level inputs to generate dynamic and noisy training sets on a massive scale. His work aims to overcome a key obstacle to the rapid development and deployment of state-of-the-art machine learning applications — namely, the need to manually label and manage training data — using a combination of algorithmic, theoretical, and systems-related techniques.

Ratner led the creation and development of Snorkel, an open-source system for building and managing training datasets with weak supervision that earned a “Best of” Award at the 44th International Conference on Very Large Databases (VLDB 2018). Snorkel is the first system of its kind that focuses on enabling users to build and manipulate training datasets programmatically, by writing labeling functions, instead of the conventional and painstaking process of labeling data by hand. The system automatically reweights and combines the noisy outputs of those labeling functions before using them to train the model — a process that enables users, including subject-matter experts such as scientists and clinicians, to build new machine learning applications in a matter of days or weeks, as opposed to months or years. Technology companies, including household names such as Google, Intel, and IBM, have deployed Snorkel for content classification and other tasks, while federal agencies such as the U.S. Department of Veterans Affairs (VA) and the Food and Drug Administration (FDA) have used it for medical informatics and device surveillance tasks. Snorkel is also deployed as part of the Defense Advanced Research Projects Agency (DARPA) MEMEX program, an initiative aimed at advancing online search and content indexing capabilities to combat human trafficking; in a variety of medical informatics and triaging projects in collaboration with Stanford Medicine, including two that were presented in Nature Communications papers this year; and in a range of other open-source use cases.
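To give a flavor of the programming model, here is a minimal sketch based on Snorkel’s publicly documented v0.9 API; the spam-detection task and the two heuristics are invented for illustration and are unrelated to the deployments described above.

```python
# Minimal sketch of Snorkel's labeling-function workflow (public v0.9 API);
# the spam task and heuristics here are toy examples, not a real deployment.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, HAM, SPAM = -1, 0, 1

@labeling_function()
def lf_contains_link(x):
    # Heuristic: messages with URLs are often spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_reply(x):
    # Heuristic: very short messages are usually legitimate replies.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df_train = pd.DataFrame({"text": [
    "check out my channel http://spam.example",
    "thanks, see you tomorrow",
]})

# Apply the labeling functions to produce a (noisy) label matrix...
applier = PandasLFApplier(lfs=[lf_contains_link, lf_short_reply])
L_train = applier.apply(df=df_train)

# ...then let the label model reweight and combine their outputs into training labels.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=100, seed=123)
print(label_model.predict_proba(L=L_train))
```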

Building on Snorkel’s success, Ratner and his collaborators subsequently released Snorkel MeTaL, now integrated into the latest Snorkel v0.9 release, to apply the principle of weak supervision to the training of multi-task learning models. Under the Snorkel MeTaL framework, practitioners write labeling functions for multiple, related tasks, and the system models and integrates various weak supervision sources to account for varied — and often, unknown — levels of granularity, correlation, and accuracy. The team approached this integration in the absence of labels as a matrix completion-style problem, introducing a simple yet scalable algorithm that leverages the dependency structure of the sources to recover unknown accuracies while exploiting the relationship structure between tasks. The team recently used Snorkel MeTaL to help achieve a state-of-the-art result on the popular SuperGLUE benchmark, demonstrating the effectiveness of programmatically building and managing training data for multi-task models as a new focal point for machine learning developers.  
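
The matrix-completion algorithm is involved, but its central trick, recovering how accurate each source is purely from how often the sources agree with one another, can be sketched in the simplest setting: three conditionally independent binary sources voting +1 or -1 on balanced classes. The estimator below is our simplified illustration, not the Snorkel MeTaL implementation, which also handles abstentions, correlated sources, and multi-task structure.

```python
import numpy as np

def accuracies_from_agreements(L):
    """Estimate each source's accuracy from pairwise agreement rates alone.

    L is an (n_examples, 3) matrix of +/-1 votes from three sources assumed
    conditionally independent given the unseen true label. With balanced
    classes, E[L_i * L_j] = (2*a_i - 1) * (2*a_j - 1), so each accuracy a_i
    can be recovered from the three pairwise products without ground truth.
    """
    m01, m02, m12 = (np.mean(L[:, i] * L[:, j]) for i, j in [(0, 1), (0, 2), (1, 2)])
    corr = np.sqrt(np.array([m01 * m02 / m12,   # (2*a_0 - 1)**2, and so on
                             m01 * m12 / m02,
                             m02 * m12 / m01]))
    return (corr + 1.0) / 2.0                   # assumes sources beat random guessing

# Toy usage: simulate three sources with true accuracies 0.9, 0.75, and 0.6.
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=20000)
L = np.stack([np.where(rng.random(y.size) < a, y, -y) for a in (0.9, 0.75, 0.6)], axis=1)
print(accuracies_from_agreements(L))            # roughly [0.9, 0.75, 0.6]
```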

Another area of machine learning in which Ratner has developed new systems, algorithms, and techniques is data augmentation, a technique for artificially expanding or reshaping labeled training datasets using task-specific data transformations that preserve class labels. Data augmentation is an essential tool for training modern machine learning models, such as deep neural networks, that require massive labeled datasets. Done by hand, it requires time-intensive tuning to achieve the compositions needed for high-quality results, while purely automated approaches tend to produce wide variations in end performance. Ratner co-led the development of a novel method for data augmentation that combines user domain knowledge with automation to achieve more accurate results in less time. His approach enables subject-matter experts to specify black-box functions that transform data points, called transformation functions (TFs), then applies a generative sequence model over the specified TFs to produce realistic and diverse datasets. The team demonstrated the utility of its approach by training its model to rotate, rescale, and move tumors in ultrasound images — a result that led to real-world improvements in mammography screening and other tasks.
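
As a rough illustration of the idea (not the published method or its code), the sketch below defines a few hypothetical transformation functions over image arrays and applies a short random sequence of them to each example. In the actual work the distribution over TF sequences is learned so that composed transformations still resemble real data; here it is hand-set.

```python
import numpy as np

# Transformation functions (TFs): black-box, label-preserving edits that a
# domain expert believes leave an image's class unchanged.
def tf_flip_horizontal(img):
    return np.fliplr(img)

def tf_shift_right(img, pixels=2):
    return np.roll(img, pixels, axis=1)

def tf_shift_down(img, pixels=2):
    return np.roll(img, pixels, axis=0)

TFS = [tf_flip_horizontal, tf_shift_right, tf_shift_down]
TF_PROBS = [0.5, 0.3, 0.2]   # stand-in for the learned sequence model

def augment(img, n_steps=3, seed=None):
    """Apply a short random sequence of TFs to produce a new training example."""
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        tf = TFS[rng.choice(len(TFS), p=TF_PROBS)]
        img = tf(img)
    return img

# Toy usage: augment a fake 28x28 grayscale image; its label stays the same.
example = np.zeros((28, 28))
example[10:18, 10:18] = 1.0
augmented = augment(example, seed=0)
```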

Amy Zhang, human-computer interaction and social computing

Amy Zhang will join the Allen School faculty in fall 2020 after completing her Ph.D. in Computer Science at MIT. Zhang focuses on the design and development of online discussion systems and computational techniques that empower users and communities by giving them the means to control their online experiences and the flow of information. Her work, which blends deep qualitative needfinding with large-scale quantitative analysis towards the design and development of new user-facing systems, spans human-computer interaction, computational social science, natural language processing, machine learning, data mining, visualization, and end-user programming. Zhang recently completed a fellowship with the Berkman Klein Center for Internet & Society at Harvard University. She was previously named a Gates Scholar at the University of Cambridge, where she earned her master’s in Computer Science.

Online discussion systems have a profound impact on society by influencing the way we communicate, collaborate, and participate in public discourse. Zhang’s goal is to enable people to effectively manage and extract useful knowledge from large-scale discussions, and to provide them with tools for customizing their online social environments to suit their needs. For example, she worked with Justin Cranshaw of Microsoft Research to develop Tilda, a system for collaboratively tagging and summarizing group chats on platforms like Slack. Tilda — a play on the common expression “tl;dr,” or “too long; didn’t read” — makes it easier for participants to enrich and extract meaning from their conversations in real time as well as catch up on content they may have missed. The team’s work earned a Best Paper Award at the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2018).

The previous year at CSCW, Zhang and her MIT colleagues presented Wikum, a system for creating and aggregating summaries of large, threaded discussions that often contain a high degree of redundancy. The system, which draws its name from a portmanteau of “wiki” and “forum,” employs techniques from machine learning and visualization, as well as a new workflow Zhang developed called recursive summarization. This approach produces a summary tree that enables readers to explore a discussion and its subtopics at multiple levels of detail according to their interests. Wikipedia editors have used the Wikum prototype to resolve content disputes on the site. It was also selected by the Collective Intelligence for Democracy Conference in Madrid as a tool for engaging citizens on discussions of city-wide proposals in public forums such as Decide Madrid. 
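
The workflow itself is simple to sketch: summarize the leaves of the discussion tree first, then summarize each comment together with the summaries of its replies, so that every level of the tree carries a progressively more condensed account. In Wikum those summaries are written by editors with machine assistance; the placeholder summarize() function below, which merely truncates text, and the toy thread are ours.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    replies: list = field(default_factory=list)
    summary: str = ""   # filled in by recursive summarization

def summarize(texts, max_words=12):
    """Placeholder summarizer; in Wikum this step is performed by human editors."""
    words = " ".join(texts).split()
    suffix = "..." if len(words) > max_words else ""
    return " ".join(words[:max_words]) + suffix

def recursive_summarize(comment):
    """Summarize a threaded discussion bottom-up, producing a summary tree."""
    child_summaries = [recursive_summarize(reply) for reply in comment.replies]
    comment.summary = summarize([comment.text] + child_summaries)
    return comment.summary

# Toy thread: a top-level comment with two replies, one of which has a reply.
thread = Comment("Should the two overlapping sections be merged?",
                 [Comment("Yes, they largely repeat each other."),
                  Comment("Only if the citations are kept.",
                          [Comment("Agreed, losing the citations would be worse.")])])
print(recursive_summarize(thread))   # condensed summary of the whole discussion
```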

Zhang has also tackled the dark side of online interaction, particularly email harassment. In a paper that appeared at last year’s ACM Conference on Human Factors in Computing Systems (CHI 2018), Zhang and her colleagues introduced a new tool, Squadbox, that enables recipients to coordinate with a supportive group of friends — their “squad” — to moderate the offending messages. Squadbox is a platform for supporting highly customized, collaborative workflows that allow squad members to intercept, reject, redirect, filter, and organize messages on the recipient’s behalf. This approach, for which Zhang coined the term “friendsourced moderation,” relieves the victims of the emotional and temporal burden of responding to repeated harassment. A founding member of the Credibility Coalition, Zhang has also contributed to efforts to develop transparent, interoperable standards for determining the credibility of news articles using a combination of article text, external sources, and article metadata. As a result of her work, Zhang was featured by the Poynter Institute as one of six academics on the “frontlines of fake news research.”

This latest group of educators and innovators joins an impressive — and impressively long — list of recent additions to the Allen School faculty who have advanced UW’s reputation across the field. They include Tim Althoff, whose research applies data science to advance human health and well-being; Hannaneh Hajishirzi, an expert in natural language processing; René Just, whose research combines software engineering and security; Rachel Lin and Stefano Tessaro, who introduced exciting new expertise in cryptography; Ratul Mahajan, a prolific researcher in networking and cloud computing; Sewoong Oh, who is advancing the frontiers of theoretical machine learning; Hunter Schafer and Brett Wortzman, who support student success in core computer science education; and Adriana Schulz, who is focused on computational design for manufacturing to drive the next industrial revolution. Counting the latest arrivals, the Allen School has welcomed a total of 22 new faculty members in the past three years.

A warm Allen School welcome to Alex, Amy, Byron, Jamie, and Kevin!

August 16, 2019
