
Researchers create smart speaker that uses white noise to monitor sleeping infants

UW researchers have developed a new smart speaker skill that lets a device use white noise to both soothe sleeping babies and monitor their breathing and movement. Credit: Dennis Wise/University of Washington

Doctors, parenting magazines and parents themselves recommend using white noise to help babies fall and stay asleep. Continuous, monotonous sounds like ocean waves, raindrops on a rooftop or the rumbling of an airplane can lull a newborn to sleep and help them rest longer. Hearing the sound also signals to little ones that it’s time to sleep.

White noise—a mixture of different pitches and sounds—can soothe fussing and boost sleep in babies. And now it can be used to monitor their motion and respiratory patterns.

Researchers at the University of Washington have developed a new smart speaker, similar to the Amazon Echo or Google Home, that uses white noise to monitor infant breathing and movement. Doing so is vital because children under the age of one are susceptible to rare and devastating sleep anomalies such as Sudden Infant Death Syndrome (SIDS), according to Allen School professor Shyam Gollakota, his Ph.D. student Anran Wang, and Dr. Jacob Sunshine of the UW School of Medicine. Respiratory failure is believed to be the main cause of SIDS.

“One of the biggest challenges new parents face is making sure their babies get enough sleep. They also want to monitor their children while they’re sleeping. With this in mind, we sought to develop a system that combines soothing white noise with the ability to unobtrusively measure an infant’s motion and breathing,” said Sunshine, who is also an adjunct professor in the Allen School.

In their paper, “Contactless Infant Monitoring using White Noise,” which they will present on Oct. 22 at the MobiCom 2019 conference in Los Cabos, Mexico, the team discusses how and why they created BreathJunior, a smart speaker that plays white noise and records how the noise is reflected back to detect the breathing motions of infants’ chests.

“Smart speakers are becoming more and more prevalent, and these devices already have the ability to play white noise,” said Gollakota, who is also the director of the Networks & Mobile Systems Lab. “If we could use this white noise feature as a contactless way to monitor infants’ hand and leg movements, breathing and crying, then the smart speaker becomes a device that can do it all, which is really exciting.”

The team developed novel algorithms that distill the tiny motion of an infant’s breathing from the white noise emitted by the speaker.

With this smart speaker skill, the device plays white noise and records how the noise is reflected back to detect breathing motions of infants’ tiny chests. It can track both small motions — such as the chest movement involved in breathing — and large motions — such as babies moving around in their cribs. It can also pick up the sound of a baby crying. Credit: Dennis Wise/University of Washington

“We start out by transmitting a random white noise signal. But we are generating this random signal, so we know exactly what the randomness is,” said Wang.  “That signal goes out and reflects off the baby. Then the smart speaker’s microphones get a random signal back. Because we know the original signal, we can cancel out any randomness from that and then we’re left with only information about the motion from the baby.”

Because an infant’s breathing motion is so minute, the chest movement is hard to detect, so Wang said the system also scans the room to pinpoint where the baby is in order to maximize changes in the white noise signal.

“Our algorithm takes advantage of the fact that smart speakers have an array of microphones that can be used to focus in the direction of the infant’s chest,” he said. “It starts listening for changes in a bunch of potential directions, and then continues the search toward the direction that gives the clearest signal.”
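The article describes the approach only in words, but the core idea (transmit a known white-noise signal, correlate the received echo against that same known signal to cancel the randomness, then track the slow modulation left behind by chest motion) can be illustrated with a small simulation. The sketch below is purely illustrative: the sample rate, modulation depth, window length and breathing rate are assumptions, not BreathJunior’s actual signal-processing pipeline.

```python
import numpy as np

# Illustrative parameters (assumptions, not BreathJunior's real values).
fs = 48_000        # audio sample rate in Hz
duration = 30      # seconds of monitoring
breath_hz = 0.5    # simulated breathing rate: 30 breaths per minute

rng = np.random.default_rng(0)
tx = rng.standard_normal(fs * duration)   # the known white-noise signal

# Simulate the echo off the infant's chest: chest motion slowly modulates
# the strength of the reflection at the breathing frequency.
t = np.arange(tx.size) / fs
chest = 1.0 + 0.01 * np.sin(2 * np.pi * breath_hz * t)
rx = 0.2 * chest * tx + 0.05 * rng.standard_normal(tx.size)   # echo + noise

# Because the transmitted randomness is known, correlating each short
# window of the received audio against the matching window of the
# transmitted signal cancels the randomness and leaves the chest motion.
win = fs // 10                      # 100 ms windows
n_win = tx.size // win
envelope = np.empty(n_win)
for i in range(n_win):
    tx_w = tx[i * win:(i + 1) * win]
    rx_w = rx[i * win:(i + 1) * win]
    envelope[i] = np.dot(rx_w, tx_w) / np.dot(tx_w, tx_w)

# The breathing rate appears as the dominant low-frequency peak.
envelope -= envelope.mean()
spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(n_win, d=win / fs)
print(f"estimated rate: {freqs[np.argmax(spectrum)] * 60:.0f} breaths/min")
```

In the real system the microphone array is also steered toward the infant, as Wang describes above, before this kind of analysis is applied.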

The group tested a prototype of BreathJunior on an infant simulator, which could be set to different breathing rates. After succeeding with the simulator, they tested it on five infants in a local hospital’s neonatal intensive care unit (NICU); the respiratory rates it detected closely matched the rates reported by standard vital signs monitors.

Sunshine explained that infants in the NICU are more likely to have unusually fast or unusually slow breathing rates, which is why the NICU monitors their breathing so closely. BreathJunior, running alongside the hospital-grade respiratory monitors the babies were already connected to, was able to accurately identify these breathing rates.

“BreathJunior holds potential for parents who want to use white noise to help their child sleep and who also want a way to monitor their child’s breathing and motion,” said Sunshine. “It also has appeal as a tool for monitoring breathing in the subset of infants in whom home respiratory monitoring is clinically indicated, as well as in hospital environments where doctors want to use unwired respiratory monitoring.”

Sunshine said it is important to note that the American Academy of Pediatrics recommends against using monitors that market themselves as reducing the risk of SIDS. The research, he said, makes no such claim. It uses white noise to track breathing and monitor motion. It can also let parents know if the baby is crying.

The research was funded by the National Science Foundation. Learn more about the researchers’ work by visiting their website, Sound Life Sciences, Inc. Read more about the speaker system at UW News, the Daily Mail, GeekWire, MIT Technology Review and Digital Trends.


Read more →

Allen School celebrates diversity and inclusion

Allen School representatives at the Grace Hopper Celebration of Women in Computing.

As a community committed to diversity and inclusion, the Allen School celebrates and values differences in its members. Yesterday (Oct. 10), the School held its annual diversity in computing reception, a favorite event highlighting the School’s participation in conferences that celebrate diversity in computing.

Students, faculty and staff who attended the Grace Hopper Celebration of Women in Computing earlier in October and the ACM Richard Tapia Celebration of Diversity in Computing in late September were recognized.

The Grace Hopper Celebration, held this year in Orlando, Florida, is the world’s largest gathering of women technologists and focuses on helping women grow, learn and develop to their highest potential.

Allen School attendees at the Richard Tapia Celebration of Diversity in Computing.

“I loved attending Grace Hopper this year. So many of the talks were so inspiring and gave me hands-on tools to approach challenges I face as a woman in tech. It was a big confidence booster, and I had the chance to meet so many amazing women in my field,” said Amanda Baughan, a graduate student in the Allen School. “It’s inspired me to tackle more difficult problems and reach for goals I may have second-guessed my own abilities in achieving previously.”

Jodi Tims, Chair of ACM-W; Allen School student Aishwarya Mandyam; Vidya Srinivasan and Sheila Tejada, co-chairs of the Grace Hopper Celebration.

Allen School student Aishwarya Mandyam was honored at the Hopper Celebration for her work, winning second place internationally in the ACM Student Research Competition.

The Tapia Celebration, held in San Diego this year, brings together people of all backgrounds, abilities and genders to recognize, celebrate and promote diversity in computing.

“I got to learn about thriving opportunities for a diverse workforce in tech and discovered it as a great platform for me to completely embrace distinct identities of myself–a woman of color, a first-gen college student, an immigrant with all transitioning struggles bolstered–and was able to find my own ground in this highly challenging field,” said Radia Karim, a junior in the Allen School. “I was really moved by the conference’s agenda and with the extremely bold and diverse Tapia attendees who have been consistently defining their own footprints in tech rising above all odds and making the future more welcoming.”

Recognizing those in attendance at these conferences, Ed Lazowska, the Bill & Melinda Gates Chair in the Allen School, said that the two conferences highlight the School’s core values and its commitment to diversity and inclusion.

“The Allen School has been widely recognized as a leader in promoting gender diversity in computing. In addition to our strides in our student body, I want to note that over the past 9 years, our faculty has grown by 29, and 15 of these are women – an amazing record for which Hank Levy deserves a great deal of credit,” he said. “In the past few years we’ve dramatically increased the attention we devote to underrepresented minority students and students from low-income backgrounds, at both the undergraduate and graduate levels.”

The Allen School has partnered with the College of Engineering’s STARS program and the state’s AccessCSforAll; students from both were also recognized.

Lisa Simonyi Prize recipient Kim Ruth and incoming Allen School Director Magda Balazinska.

During the reception, Kimberly Ruth, an Allen School senior, was awarded the Lisa Simonyi Prize. The prize was established by Lisa and Charles Simonyi to recognize students who exemplify the commitment to excellence, leadership, and diversity to which the School aspires. Ruth is an exceptionally talented and dedicated student. She is a member of UW’s Interdisciplinary Honors program and is pursuing a dual major in computer engineering and mathematics. Not only is she engaged in research with Allen School professors Franziska Roesner and Tadayoshi Kohno in the Security & Privacy Lab, but she has also been awarded the 2018 Goldwater Scholarship and the 2017 Washington Research Foundation Fellowship. She has served for four years as a tutor in a program that teaches math and Python programming to middle and high school students, and she founded Go Figure, an initiative to get middle school students excited about math. Last year, she was named to the Husky 100, an annual honor recognizing UW students who are making a positive impact on the University community.

As professor and incoming Allen School director Magdalena Balazinska noted when she presented the award to Ruth, “In a program full of remarkable students, Kim stands out.”

Thanks to the Simonyis for supporting diversity and excellence, and thanks to everyone who came out to celebrate the people who are making our school and our field a more welcoming destination for all. And congratulations to Kim!

For more about our efforts to advance diversity in computing, check out the Allen School’s inclusiveness statement here.
Read more →

Allen School researchers find racial bias built into hate-speech detection


Top left to right: Sap, Gabriel, Smith; bottom left to right: Card, Choi

The volume of content posted on Facebook, YouTube, Twitter and other social media platforms every moment of the day, from all over the world, is monumental. Unfortunately, some of it is biased, hate-filled language targeting members of minority groups and often prompting violent action against them. Because it is impossible for human moderators to keep up with the volume of content generated in real time, platforms are turning to artificial intelligence and machine learning to catch toxic language and stop it quickly. Regrettably, these toxic-language detection tools have been found to suppress already marginalized voices.

“Despite the benevolent intentions of most of these efforts, there’s actually a really big racial bias problem in hate speech detection right now,” said Maarten Sap, a Ph.D. student in the Allen School. “I’m not talking about the kind of bias you find in racist tweets or other forms of hate speech against minorities, instead the kind of bias I’m talking about is the kind that leads harmless tweets to be flagged as toxic when written by a minority population.”

In their paper, “The Risk of Racial Bias in Hate Speech Detection,” presented at the recent Association for Computational Linguistics (ACL) meeting, Sap, fellow Ph.D. student Saadia Gabriel, professors Yejin Choi and Noah Smith of the Allen School and the Allen Institute for Artificial Intelligence, and Dallas Card of Carnegie Mellon University studied two datasets totaling 124,779 tweets that had been flagged for toxic language by a machine learning tool used by Twitter. What they found was widespread evidence of racial bias in how the tool characterized content. In one dataset, the tool mistakenly reported 46% of non-offensive tweets written in African American English (AAE), a variety commonly spoken by Black people in the US, as offensive, versus 9% of those written in general American English. In the other dataset, 26% of AAE tweets were reported as offensive when they were not, versus 5% of those in general American English.
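To make the disparity concrete: a false positive rate per dialect is simply the share of non-offensive tweets that the classifier nevertheless flags. A minimal sketch of that computation is below; the tiny inline dataset is made up for illustration, whereas the real datasets contain tens of thousands of tweets.

```python
import pandas as pd

# Made-up toy data: one row per tweet that human annotators judged
# NOT offensive, the dialect it was written in, and whether the
# classifier nevertheless flagged it as offensive.
tweets = pd.DataFrame({
    "dialect": ["aae", "aae", "aae", "aae", "general", "general", "general", "general"],
    "flagged": [1, 1, 0, 0, 0, 0, 0, 1],
})

# False positive rate per dialect, as a percentage.
fpr = tweets.groupby("dialect")["flagged"].mean().mul(100)
print(fpr)   # the study reports roughly 46% (AAE) vs. 9% (general) in one dataset
```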

“I wasn’t aware of the exact level of bias in Perspective API — the tool used to detect online hate speech — when searching for toxic language, but I expected to see some level of bias from previous work that examined how easily algorithms like AI chatter bots learn negative cultural stereotypes and associations,” said Gabriel. “Still, it’s always surprising and a little alarming to see how well these algorithms pick up on toxic patterns pertaining to race and gender when presented with large corpora of unfiltered data from the web.”

This matters because ignoring the social context of the language, Sap said, harms minority populations by suppressing inoffensive speech. To address the biases displayed by the tool, the group changed the annotation scheme, that is, the rules for labeling hate speech. As an experiment, the researchers took 350 AAE tweets and enlisted Amazon Mechanical Turkers for help.

Gabriel explained that on Amazon Mechanical Turk, researchers can set up tasks for workers to help with something like a research project or marketing effort. There are usually instructions and a set of criteria for the workers to consider, then a number of questions. 

“Here, you can tell workers specifically if there are particular things you want them to consider when thinking about the questions, for instance the tweet source,” she said. “Once the task goes up, anyone who is registered as a worker on Amazon Mechanical Turk can answer these questions. However, you can add qualifications to restrict the workers. We specified that all workers had to originate from the US since we’re considering US cultural norms and stereotypes.”

When given the tweets without background information, the Turkers labeled 55% of them as offensive. When also given the dialect and race of the tweeters, they labeled 44% as offensive. When asked whether they personally found the tweets offensive, they said yes for only 33%. This showed the researchers that priming the annotators with the source’s race and dialect influenced the labels, and revealed that the annotations are subjective rather than objective.
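As a rough illustration of how one might check whether a drop from 55% to 44% on 350 tweets is more than noise, the sketch below runs a simple two-proportion z-test. It treats the two conditions as independent samples of the same size, which is a simplification for illustration; it is not the analysis reported in the paper.

```python
from statistics import NormalDist

n = 350                    # AAE tweets shown to the annotators
p1, p2 = 0.55, 0.44        # share labeled offensive without / with dialect priming
x1, x2 = round(p1 * n), round(p2 * n)

# Pooled two-proportion z-test (independence assumed for illustration only).
p_pool = (x1 + x2) / (2 * n)
se = (p_pool * (1 - p_pool) * (2 / n)) ** 0.5
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")   # roughly z = 2.9, p < 0.01
```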

“Our work serves as a reminder that hate speech and toxic language is highly subjective and contextual,” said Sap. “We have to think about dialect, slang and in-group versus out-group, and we have to consider that slurs spoken by the out-group might actually be reclaimed language when spoken by the in-group.”

While the findings are concerning, Gabriel believes language processing systems can be taught to take the source of a post into account, preventing the racial biases that mischaracterize content as hate speech and can lead to already marginalized voices being deplatformed.

“It’s not that these language processing machines are inventing biases, they’re learning them from the particular beliefs and norms we spread online. I think that in the same way that being more informed and having a more empathic view about differences between peoples can help us better understand our own biases and prevent them from having negative effects on those around us, injecting these kind of deeper insights into machine learning algorithms can have a significant difference on preventing racial bias,” she said. “For this, it is important to include more nuanced perspectives and greater context when doing natural language processing tasks like toxic language detection. We need to account for in-group norms and the deep complexities of our culture and history.”

To learn more, read the research paper here and watch a video of Sap’s ACL presentation here. Also see previous coverage of the project by Vox, Forbes, TechCrunch, New Scientist, Fortune and MIT Technology Review.

Read more →

Allen School’s 2019-2020 Distinguished Lecture Series will explore leading-edge innovation and real-world impact

Top left to right: Dean, Patterson, Spelke; bottom left to right: Howard, McKeon, Pereira

Mark your calendars! Another exciting season of the Allen School’s Distinguished Lecture Series kicks off on Oct. 10. During the 2019-2020 season, we will explore deep learning, domain-specific architectures, recent advances in artificial intelligence and robotics, and so much more. All lectures take place at 3:30 p.m. in the Amazon Auditorium on the ground floor of the Bill & Melinda Gates Center on the University of Washington’s Seattle campus. In addition, each lecture will be live streamed on the Allen School’s YouTube channel. 

Oct. 10: Jeff Dean, Google Senior Fellow and Senior Vice President for Google AI

Allen School alumnus Jeff Dean (Ph.D., ‘96) returns to his alma mater on Thursday, Oct. 10 to deliver a talk on “Deep Learning to Solve Challenging Problems.” Dean’s presentation will highlight recent accomplishments by Google research teams, such as the open-source TensorFlow system to rapidly train, evaluate and deploy machine learning systems, and how they relate to the National Academy of Engineering’s Grand Challenges for Engineering in the 21st Century. He will also explore how machine learning is transforming many aspects of today’s computing hardware and software systems. 

Dean, who joined Google in 1999, currently leads teams working on systems for speech recognition, computer vision, language understanding and various other machine learning tasks. During his two decades with the company, he co-designed and implemented many of Google’s most important and visible features, including multiple generations of its crawling, indexing and query serving systems as well as pieces of Google’s initial advertising and AdSense for content systems. He also helped create Google’s distributed computing infrastructure, including MapReduce, BigTable and Spanner. 

Oct. 29: David Patterson, Professor Emeritus, University of California, Berkeley; Distinguished Engineer, Google; and Vice Chair, RISC-V Foundation

David Patterson will deliver a talk on Oct. 29 examining “Domain Specific Architectures (DSA) for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs).” His presentation will explore how the recent success of deep neural networks has inspired a resurgence in domain-specific architectures to run them, partly as a result of the deceleration of microprocessor performance improvement brought on by the ending of Moore’s Law. His talk will review Google’s first-generation Tensor Processing Unit (TPUv1) and how the company built the first production DSA supercomputer for the much harder problem of training, which was deployed in 2017.

Patterson’s work on RISC, Redundant Array of Inexpensive Disks (RAID), and Network of Workstation projects helped lead to multibillion-dollar industries. In 2017, he and RISC collaborator John Hennessy shared the Association for Computing Machinery’s A.M. Turing Award — the “Nobel Prize of computing” — for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry.

Nov. 14: Elizabeth Spelke, Marshall L. Berkman Professor of Psychology at Harvard University and Investigator, NSF-MIT Center for Brains, Minds and Machines

Elizabeth Spelke will deliver a lecture on Nov. 14 titled, “From Core Concepts to New Systems Knowledge.” Her lecture will center on cognitive systems in young children and the ability of the human species to gain knowledge not only through gradual learning but also through a fast and flexible learning process that appears to be unique to humans and emerges with the onset of language. Although this phase of life isn’t fully understood, Spelke is using research in psychology, neuroscience and artificial intelligence to better understand human cognitive function.

Spelke explores the sources of uniquely human cognitive capacities, including the capacity for formal mathematics, for constructing and using symbolic representations, and for developing comprehensive taxonomies of objects. Conducting behavioral research on infants and preschool children, Spelke studies the origins and development of human cognition by examining how humans develop their understanding of objects, actions, people, places, numbers and geometry. She also works with computational cognitive scientists to test computational models of infants’ cognitive capacities, and to extend her research into the field with the ultimate goal to enhance young children’s learning.

Dec. 5: Ayanna Howard, Linda J. and Mark C. Smith Professor and Chair, School of Interactive Computing at the Georgia Institute of Technology

Ayanna Howard will deliver a presentation on Dec. 5 titled “Roving for a Better World.” Her talk will focus on the role of computer scientists as responsible global citizens. She will delve into the implications of recent advances in robotics and artificial intelligence, and explain the critical importance of ensuring diversity and inclusion at every stage in order to reduce the risk of unconscious bias and ensure that robots are designed to be accessible to all.

Prior to joining Georgia Tech, Howard was a senior robotics researcher and deputy manager in the Office of the Chief Scientist at NASA’s Jet Propulsion Laboratory. She was first hired by NASA at the age of 27 to lead a team designing a robot for future Mars exploration missions that could “think like a human and adapt to change.” At Georgia Tech, she has served as associate director of research for the Institute for Robotics and Intelligent Machines and chair of the robotics Ph.D. program. Business Insider named her one of the most powerful women engineers in the world in 2015, and in 2018 she was named to Forbes’ Top 50 Women in Tech.

Jan. 16: Kathleen McKeown, Henry and Gertrude Rothschild Professor of Computer Science and Founding Director, Data Science Institute, Columbia University

Kathleen McKeown will deliver a talk on Jan. 16; more details about her presentation will be posted soon on our Distinguished Lecture Series page.

McKeown’s research is in natural language processing, summarization, natural language generation and analysis of social media. In these areas, her work focuses on text summarization, generating updates on disasters from live streaming information, generating messages about electricity usage and using reinforcement learning over usage logs to determine what kinds of messages can change behavior, and analyzing social media to detect messages about aggression and loss. While at Columbia, McKeown has served as director of the Data Science Institute, was department chair from 1998 to 2003, and spent two years as vice dean for research for the School of Engineering and Applied Science. She is also active internationally, having served as president, vice president and secretary-treasurer of the Association for Computational Linguistics as well as a board member and secretary of the board of the Computing Research Association.

Feb. 27: Fernando Pereira, Vice President and Engineering Fellow, Google

Fernando Pereira will deliver the Allen School’s 2020 Taskar Memorial Lecture on Feb. 27. More details about his presentation will be posted soon on our Distinguished Lecture Series page.

Pereira leads research and development at Google in natural language understanding and machine learning. Previously, he was chair of the computer and information science department at the University of Pennsylvania, head of the machine learning and information retrieval department at AT&T Labs, and held research and management positions at SRI International. Pereira has produced more than 120 research publications on computational linguistics, machine learning, bioinformatics, speech recognition and logic programming. He holds several patents and is widely recognized for his contributions to sequence modeling, finite-state methods, and dependency and deductive parsing.

Be sure to check our Distinguished Lecture Series page for updates throughout the season, and please plan to join us!

Read more →

Peak performance!

Dan and Galen Weld at the summit of Buck Mountain, with Glacier Peak in the background

On Saturday, Allen School Ph.D. student Galen Weld, his twin brother Adam, his father (and Allen School professor) Dan, and his mom Margaret Rosenfeld reached the 8,528-foot summit of Buck Mountain in the Glacier Peak Wilderness. With that achievement, Galen became the youngest person to summit each of the 100 highest peaks in Washington, and Dan and Galen became the first father-son team to achieve this milestone. (Dan completed his summit of Washington’s “Top 100” in 2016.)

Galen pops the cork on Champagne that Dan brought along to celebrate the achievement

Read more →

Hao Peng wins 2019 Google Ph.D. Fellowship


Hao Peng, a Ph.D. student working with Allen School professor Noah Smith, has been named a 2019 Google Ph.D. Fellow for his research in natural language processing (NLP). His research focuses on a wide variety of problems in NLP, including representation learning and structured prediction.

Peng, who is one of 54 students worldwide to be selected for a Fellowship, aims to analyze and understand the workings and decisions of deep learning models and to incorporate inductive bias into their design, facilitating better learning algorithms. His research provides a better understanding of many state-of-the-art models in NLP and offers more principled ways to insert inductive bias into representation learning algorithms, making them both more computation-efficient and less data-hungry.

“Hao has been contributing groundbreaking work in natural language processing that combines the strengths of so-called ‘deep’ representation learning with structured prediction,” said Smith. “He is creative, sets a very high bar for his work, and is a delightful collaborator.”

One of Peng’s groundbreaking contributions was his research in semi-parametric models for natural language generation and text summarization. He focused on the project last summer during an internship with the Language team at Google AI. Based on his work, Peng was asked to continue his internship part-time until the team published its findings at the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).

Another important piece of research that Peng published was “Backpropagating through Structured Argmax using a SPIGOT,” which earned a Best Paper Honorable Mention at the annual meeting of the Association for Computational Linguistics in 2018. The paper proposes the structured projection of intermediate gradients optimization technique (SPIGOT), which facilitates end-to-end training of neural models with structured prediction as intermediate layers. Experiments show that SPIGOT outperforms pipelined and non-structured baselines, providing evidence that structured bias may help to learn better NLP models. Due to its flexibility, SPIGOT is applicable to multitask learning with partial or full supervision of the intermediate tasks, or to inducing latent intermediate structures, according to Peng. His collaborators on the project include Smith and Sam Thomson, formerly a Ph.D. student at Carnegie Mellon University and now at Semantic Machines.
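For readers curious about the mechanics, the heart of SPIGOT is a surrogate gradient for the intermediate argmax: take a step against the downstream gradient, project the result back onto the feasible set, and use the difference from the original argmax as the gradient passed to the scores. The sketch below reduces this to the simplest possible case, a single categorical choice whose feasible set is the probability simplex; the paper handles richer structured polytopes, so treat this only as an illustrative toy, not the authors' implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def spigot_grad(z, grad_z, eta=1.0):
    """Surrogate gradient for the scores behind an intermediate argmax.

    z       one-hot argmax produced in the forward pass
    grad_z  downstream gradient with respect to z
    """
    # Step away from the downstream loss, project back onto the feasible
    # set, and use the difference as the update direction for the scores.
    return z - project_simplex(z - eta * grad_z)

# Toy usage: scores over four candidate labels standing in for structures.
scores = np.array([0.1, 2.0, 0.3, -1.0])
z = np.eye(scores.size)[np.argmax(scores)]      # hard, non-differentiable argmax
grad_z = np.array([0.0, 0.5, -0.5, 0.0])        # pretend downstream gradient
print(spigot_grad(z, grad_z))
```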

Peng has co-authored a total of eight papers in the last three years while pursuing his Ph.D. at the Allen School. The Google Fellowship will help him extend his results, providing fresh theoretical justifications and a better understanding of many state-of-the-art NLP models, which could yield more principled techniques for baking inductive bias into representation learning algorithms.

Peng has worked as a research intern at Google New York and Google Seattle during his studies at the UW. Prior to that he worked as an intern at Microsoft Research Asia in Beijing, and at the University of Edinburgh.

Since 2009, the Google Ph.D. Fellowship program has recognized and supported exceptional graduate students working in core and emerging areas of computer science. Previous Allen School recipients include Joseph Redmon (2018), Tianqi Chen and Arvind Satyanarayan (2016), Aaron Parks and Kyle Rector (2015) and Robert Gens and Vincent Liu (2014). Learn more about the 2019 Google Fellowships here.

Congratulations, Hao!

Read more →

“Geek of the Week” Justin Chan is using smartphones to democratize medical diagnostics

Allen School Ph.D. student Justin Chan is on a mission to put the power of medical diagnostics into people’s hands, inspired by the ubiquity of smartphones coupled with advancements in artificial intelligence. Working with collaborators in the Networks & Mobile Systems Lab led by professor Shyam Gollakota and in UW Medicine, Chan has developed a mobile system for detecting ear infections in children and a contactless AI system for detecting cardiac arrest using smart devices. He co-founded Edus Health, a University of Washington spin-out that is pursuing clearance from the U.S. Food & Drug Administration to move his research out of the lab and into people’s hands and homes. His efforts have earned coverage by Scientific American, NPR, MIT Tech Review, and more — and, most recently, a feature as GeekWire’s Geek of the Week.

The motivation for Chan’s research stems from his recognition that, while smart devices can pack multiple sophisticated sensors into a battery-powered package small enough to fit in a pocket, the medical industry often still relies on expensive — and large — specialized devices for diagnosing patients. “I believe that everyone should be able to own their medical data. To that end, my goal is to make medical diagnostics frugal and accessible enough that anyone with a few spare parts and DIY-know-how would be able to obtain clinical-grade accuracies in the comfort of their homes,” Chan told GeekWire. “While the reality is that many diagnostic tools in healthcare often require expensive tools and specialist expertise, I am hoping we will be able to change that.”

Chan further explored the potential for smartphones and AI to transform health care in a recent article he co-authored with Drs. Sharat Raju of UW Medicine and Eric Topol of Scripps Research that appeared in The Lancet. In that article, the authors highlighted multiple examples of research aimed at using these technologies to diagnose a range of pediatric conditions in a variety of settings.

Read the full GeekWire profile here, and check out The Lancet article here.

Way to go, Justin!

Read more →

Allen School’s Samia Ibtasam receives Google Women Techmakers Scholarship


Samia Ibtasam, a third-year Ph.D. student in the Allen School’s Information & Communication Technology for Development (ICTD) Lab, was recognized recently by Google with a Women Techmakers Scholarship. The Women Techmakers Scholars Program — formerly known as the Anita Borg Memorial Scholarship Program — aims to advance gender equality in computing by encouraging women to become active role models and leaders in the field. Ibtasam, who works with Allen School professor Richard Anderson to increase financial and technological inclusion in low-resource communities, is one of only 20 women at universities across North America to receive a 2019 scholarship.

Ibtasam’s focus on research addressing issues in developing communities and her commitment to increasing diversity in computing were both shaped by her experiences in her native Pakistan. Before her arrival at the University of Washington in 2016, Ibtasam was the founding co-director of the Innovations for Poverty Alleviation Lab (IPAL) at the Information Technology University (ITU) in Lahore. During her time at IPAL, Ibtasam focused on developing diagnostic applications and information systems to support maternal, neonatal, and child health in a country with one of the highest infant mortality rates in the world.

Working with the government of the Pakistani province of Punjab and the United Kingdom’s Department for International Development (DFID), Ibtasam launched Har Zindagi — Urdu for “every life,” inspired by the slogan “every life matters.” The goal of the program was to revamp the child immunization system and introduce digital health records for its citizens. One of Ibtasam’s priorities was to make the immunization card machine-readable to enable public health administrators to easily and effectively monitor child vaccinations, and to educate parents, including many with low literacy, about the recommended immunization schedule. Ibtasam also led the development of an Android-based mobile app to assist vaccinators in entering information in the field, as well as a web dashboard to track immunization coverage and retention data for policymakers. Along the way, she had to contend with a number of challenges, including lack of wireless coverage in rural areas, field workers unfamiliar with smartphone technology, family displacement, and the need to coordinate among diverse stakeholders, from the World Health Organization and UNICEF, to local government officials and app developers.

For the four years she was at ITU, Ibtasam was the only woman on the computer science faculty. She is also the first Pakistani woman to formally specialize in the field of ICTD. These early-career experiences have reinforced her belief in the need to cultivate more women as leaders and role models through programs like the Women Techmakers Scholarship.

“One thing that I did and am still continuing to work on is inspiring other young women to be technical and confident. As the only female faculty member in ITU’s CS department, I saw the need and value of being a role model and source of support for young women,” Ibtasam said. “Throughout my career, I have been in many work environments — research labs, conferences, meetings, and panels — where I was the only woman. It took courage and effort to be part of them, but it also made me resilient to stay, speak up, and represent. And while at times, there were male mentors who provided advice and support, many times doors were closed to me. So, I want to hold doors open for other women through my inspiration, my mentorship, my research, and my designed technologies.”

Ibtasam has carried this theme into her work at the Allen School, where she focuses on projects that explore the availability and adoption of digital financial services by people in developing and emerging markets — an issue that impacts more than two billion people worldwide who remain unbanked. The issue also has profound implications for women’s empowerment in societies where cultural, religious, and even legal frameworks may hinder their financial independence. 

For example, Ibtasam was the lead author of a paper exploring women’s use of mobile money in Pakistan that appeared at the 1st ACM SIGCAS Conference on Computing and Sustainable Societies (COMPASS ‘18) organized by the Association for Computing Machinery’s Special Interest Group on Computers & Society. Working alongside colleagues in the ICTD Lab, the Information Technology University in Lahore, and the Georgia Institute of Technology, Ibtasam examined how gendered barriers in resource-constrained communities affect opportunities for women to improve their circumstances through the use of digital financial services. With information gleaned from more than 50 semi-structured interviews with Pakistani women and men along with data from the Financial Inclusion Insights Survey, the team identified the socioeconomic, religious, and gendered dynamics that influence the ability of women in the country to access financial services. Based on these findings, the team proposed a set of recommendations for expanding the benefits of digital financial services to more women.

The previous year, Ibtasam and co-authors presented a paper at the ACM’s 9th International Conference on Information and Communication Technologies and Development (ICTD ‘17) that evaluated how learnability and other factors beyond access can help or hinder widespread adoption of smartphone-based mobile money applications in Pakistan. In addition, Ibtasam has explored the role of mobile money in improving financial inclusion in southern Ghana, which also appeared at COMPASS ‘18, and teamed up with members of the Allen School’s Security and Privacy Research Lab to investigate the computer security and privacy needs and practices of refugees who had recently resettled in the United States. The latter appeared at the 2018 IEEE Symposium on Security and Privacy (S&P ’18) co-sponsored by the IEEE Computer Society’s Technical Committee on Security and Privacy and the International Association for Cryptologic Research.

Ibtasam and her fellow Google scholars were invited to a retreat at the company’s headquarters in Mountain View, California in June, where they had the opportunity to engage in a variety of professional development and networking activities. Previous Allen School recipients of the Google scholarship include Ph.D. alumni Kira Goldner (2016), Eleanor O’Rourke and Irene Zhang (2015), Jennifer Abrahamson and Nicola Dell (2012), Janara Christensen and Kateryna Kuksenok (2011), and Lydia Chilton and Kristi Morton (2010).

Congratulations, Samia!

Read more →

Allen School hosts Mathematics of Machine Learning Summer School

Kevin Jamieson at the Mathematics of Machine Learning Summer School.

In late July and early August, the Allen School was pleased to host a very successful MSRI Mathematics of Machine Learning Summer School. Around 40 Ph.D. students from across the country — and beyond — were brought to the University of Washington for two weeks of lectures by experts in the field, as well as problem sessions and social events. The lectures were also opened up to the broader UW community and recorded for future use.

Learning theory is a rich field at the intersection of statistics, probability, computer science, and optimization. Over the last few decades the statistical learning approach has been successfully applied to many problems of great interest, such as bioinformatics, computer vision, speech processing, robotics, and information retrieval. These impressive successes relied crucially on the mathematical foundation of statistical learning.

Recently, deep neural networks have demonstrated stunning empirical results across many applications like vision, natural language processing, and reinforcement learning. The field is now booming with new mathematical problems, and in particular, the challenge of providing theoretical foundations for deep learning techniques is still largely open. On the other hand, learning theory already has a rich history, with many beautiful connections to various areas of mathematics such as probability theory, high dimensional geometry, and game theory. The purpose of the summer school was to introduce graduate students and advanced undergraduates to these foundational results, as well as to expose them to the new and exciting modern challenges that arise in deep learning and reinforcement learning.

Emma Brunskill

Participants explored a variety of topics with the guidance of lecturers Joan Bruna, a professor at New York University (deep learning); Stanford University professor Emma Brunskill (reinforcement learning); Sébastien Bubeck, senior researcher at Microsoft Research (convex optimization); Allen School professor Kevin Jamieson (bandits); and Robert Schapire, principal researcher at Microsoft Research (statistical learning theory).

The summer school was made possible by support from the Mathematical Sciences Research Institute, Microsoft Research, and the Allen School, with the cooperation of the Algorithmic Foundations of Data Science Institute at the University of Washington. The program was organized by Bubeck and Adith Swaminathan of Microsoft Research and Allen School professor Anna Karlin.

Learn more by visiting the summer school website here, and check out the video playlist here!

Read more →

Allen School releases MuSHR robotic race car platform to drive advances in AI research and education

An example of the MuSHR robotic race car, which people can build from scratch for research or educational purposes — or just for fun. Mark Stone/University of Washington

The race is on to cultivate new capabilities and talent in robotics and artificial intelligence, but the high barrier to entry for many research platforms means many would-be innovators could be left behind. Now, thanks to a team of researchers in the Personal Robotics Laboratory at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, expert and aspiring roboticists alike can build a fully functional, robotic race car with advanced sensing and computational capabilities at a fraction of the cost of existing platforms. MuSHR — short for Multi-agent System for non-Holonomic Racing — provides researchers, educators, students, hobbyists, and hackers a vehicle to advance the state of the art, learn about robotic principles, and experiment with capabilities that are normally reserved for high-end laboratories.

“There are many tough, open challenges in robotics, such as those surrounding autonomous vehicles, for which MuSHR could be the ideal test platform,” said lab director Siddhartha Srinivasa, a professor in the Allen School. “But beyond research, we also need to think about how we prepare the next generation of scientists and engineers for an AI-driven future. MuSHR can help us answer that challenge, as well, by lowering the barrier for exploration and innovation in the classroom as well as the lab.”

MuSHR is an open-source, full-stack system for building a robotic race car using off-the-shelf and 3D-printed parts. The website contains all of the information needed to assemble a MuSHR vehicle from scratch, including a detailed list of materials, ready-to-use files for 3D-printing portions of the race-car frame, step-by-step assembly instructions and video tutorials, and the software to make the car operational. 

More complex, research-oriented robot navigation systems, such as the Georgia Tech AutoRally, cost upwards of $10,000. The more budget-friendly MIT RACECAR, the original do-it-yourself platform that inspired the UW team’s project, costs $2,600 with basic sensing capabilities. By contrast, a low-end MuSHR race car with no sensing can be assembled for as little as $610, and a high-end car equipped with a multitude of sensors can be built for around $930. 

“We were able to achieve a dramatic cost reduction by using low-cost components that, when tested under MuSHR’s use cases, do not have a significant impact on system performance compared to other, more costly systems,” noted Allen School Ph.D. student Patrick Lancaster. But don’t be fooled by the relatively modest price; what’s under the hood will appeal even to seasoned researchers, as the high-end setup can support advanced research projects.

“Most robotic systems available today fall into one of two categories: either simple, educational platforms that are low-cost but also low-functionality, or more robust, research-oriented platforms that are prohibitively expensive and complex to use,” explained Allen School master’s student Johan Michalove, one of the developers of the project. “MuSHR combines the best of both worlds, offering high functionality at an affordable price. It is comprehensive enough for professional research, but also accessible enough for classroom or individual use.”


MuSHR comes with a set of computational modules that enable intelligent decision making, including a software and sensor package that enables localization in a known environment, and a controller for navigating towards a goal while avoiding collisions with other objects in its path. Users can build upon these preliminary AI capabilities to develop more advanced perception and control algorithms. The name “MuSHR” was inspired by the UW’s mascot, the Huskies, because it calls to mind the mushing of a dogsled at top speed. As it happens, a MuSHR race car would outpace the dogs, achieving speeds in excess of 15 miles per hour — useful for researchers working to solve critical challenges in autonomous vehicle systems, such as high-speed collision avoidance, to make them viable for real-world use.
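MuSHR’s actual navigation code lives in the repositories linked from mushr.io. As a flavor of the kind of controller such a stack layers on top of localization, here is a minimal pure-pursuit-style steering sketch for a car-like (non-holonomic) robot; the function name and the wheelbase value are assumptions for illustration, not MuSHR’s API.

```python
import math

def pure_pursuit_steering(x, y, yaw, goal_x, goal_y, wheelbase=0.3):
    """Steering angle that arcs a car-like robot toward a lookahead goal."""
    # Express the goal in the robot's own frame.
    dx, dy = goal_x - x, goal_y - y
    local_x = math.cos(yaw) * dx + math.sin(yaw) * dy
    local_y = -math.sin(yaw) * dx + math.cos(yaw) * dy
    lookahead = math.hypot(local_x, local_y)
    if lookahead < 1e-6:
        return 0.0
    # Curvature of the circular arc through the goal point, then the
    # bicycle-model steering angle that follows that arc.
    curvature = 2.0 * local_y / lookahead ** 2
    return math.atan(wheelbase * curvature)

# Toy usage: robot at the origin facing +x, goal one meter ahead and
# half a meter to the left; the result is a positive (left) steering angle.
print(pure_pursuit_steering(0.0, 0.0, 0.0, 1.0, 0.5))
```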

For students with a similar need for speed, MuSHR offers an opportunity to learn about the challenges of autonomous systems in a hands-on way. Allen School Ph.D. student and MuSHR project lead Matthew Schmittle worked with students to put the platform through its paces in the school’s Mobile Robotics course this past spring, in which 41 undergraduates used MuSHR to build a complete mobile robotic system while grappling with some of the same challenges as more seasoned researchers at the likes of Tesla, Uber, and Google.

“MuSHR introduces students to key concepts such as localization, planning, and control,” Schmittle, who served as a teaching assistant for the course, explained. “The course culminated in a demo day, where students had a chance to show how well they implemented these concepts by using their MuSHR cars to navigate a wayfinding course. We hope classrooms around the country will take advantage of MuSHR to engage more students in the exciting world of robotics and AI.”

So far, the MuSHR platform has featured in two undergraduate and two graduate-level courses at the UW. Nearly 130 students have had the opportunity to explore robotic principles using MuSHR, and the Allen School plans to offer similar classes in the future.

A fleet of MuSHR cars used by students in the Allen School’s Mobile Robotics course earlier this year. Mark Stone/University of Washington

“As a roboticist, I find MuSHR’s platform for rapid testing and refinement to be an enticing prospect for future research,” Srinivasa said. “As an educator, I am excited to see how access to MuSHR will spark students’ and teachers’ imaginations. And let’s not forget the hobbyists and hackers out there. We’re looking forward to seeing the various ways people use MuSHR and what new capabilities emerge as a result.” 

Individuals who are interested in building their own MuSHR robotic race car can download the code and schematics, and view tutorials on the MuSHR website. Educators who are interested in incorporating the MuSHR platform into their classes can email the team at mushr@cs.washington.edu to request access to teaching materials, including assignments and solutions. The team will also provide a small number of pre-built MuSHR cars to interested parties who would otherwise not have the financial means to build the cars themselves, with priority given to schools, groups, and labs whose members are underrepresented in robotics. Interested individuals or labs are encouraged to submit an application by filling out this form.

MuSHR is a work in progress, and the version released today is the lab’s third iteration of the platform. The team plans to continue refining MuSHR while employing it in a variety of research projects at the UW. In addition to Srinivasa, Michalove, Schmittle, and Lancaster, members of the MuSHR team include professor Joshua R. Smith, who holds a joint appointment in the Allen School and the Department of Electrical & Computer Engineering; fifth-year master’s students Matthew Rockett and Colin Summers; postdocs Sanjiban Choudhury and Christoforos Mavrogiannis; and Allen School Ph.D. alumna and current postdoc Fereshteh Sadeghi.

The MuSHR project was undertaken with support from the Allen School; the Honda Research Institute; and Intel, which contributed the RealSense on-board cameras used in the construction of the vehicles.

Read coverage of MuSHR in GeekWire here.

Read more →
