Alexa, the in-home heart monitoring AI: University of Washington researchers use smart speakers to detect signs of cardiac arrest

Photo: Sarah McQuate/University of Washington

When someone’s heart stops beating during cardiac arrest, rapid administration of cardiopulmonary resuscitation (CPR) can save that person’s life. However, a significant number of cardiac arrest incidents take place outside of a hospital setting where help may not be immediately at hand, and around 90% of those incidents result in death. An estimated 300,000 people in North America alone die from out-of-hospital cardiac arrests (OHCA) each year. The vast majority of these deaths occur in the person’s home, often in the bedroom where they may be out of sight and out of earshot of potential help. But now, users of Alexa and other smart devices who may be at risk can take heart from a new artificial intelligence system developed by researchers at the Allen School and UW Medicine.

A team led by professor Shyam Gollakota of the Networks & Mobile Systems Lab and Dr. Jacob Sunshine of the Department of Anesthesiology & Pain Medicine has developed a way to turn smart speakers and smartphones into contactless heart monitoring devices capable of detecting instances of agonal breathing — an indicator that someone is suffering a cardiac arrest — with the goal of immediately alerting family members or emergency services. The system employs AI to distinguish agonal breathing from other types of breathing in real time within a bedroom environment, even in the presence of other sounds, with 97% accuracy. The team, which includes Allen School Ph.D. student Justin Chan, first author on the paper, and Dr. Thomas Rea of UW Medicine and King County Medic One, published their results in the Nature journal npj Digital Medicine.

Agonal breathing is a distinctive type of disordered breathing that arises from a brainstem reflex in a person suffering severe hypoxia, or oxygen deprivation. Often described as a person taking gasping breaths, agonal breathing is present in roughly half of cardiac arrest cases reported to emergency services dispatchers. With the proliferation of smartphones and smart speakers like the Amazon Echo and Google Home — projected to be in 75% of U.S. households by next year — the researchers saw an opportunity to combine the capabilities of these increasingly popular devices with a distinctive audible biomarker of a life-threatening medical emergency to enable early detection and intervention, even in cases where the patient may be completely alone. In addition to private residences, the system could be deployed in unmonitored health care settings such as hospital wards, nursing homes, and assisted living facilities.

“We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR,” Gollakota said in a UW News release. “And then if there’s no response, the device can automatically call 911.”

The research team, clockwise from top left: Justin Chan, Shyam Gollakota, Thomas Rea, and Jacob Sunshine

The team trained its AI to recognize the sound of agonal breathing using nine years’ worth of actual 911 calls to King County Emergency Medical Services. Those calls included 19 hours of recorded instances of agonal breathing, from which the researchers extracted 236 clips. To ensure the system would be practical in a real-world setting like someone’s bedroom, the team played the clips over distances between one and six meters, with and without the addition of ambient indoor and outdoor noises that might be picked up by a smart speaker during the night.

“We played these examples at different distances to simulate what it would sound like if the patient was at different places in the bedroom,” Chan explained. “We also added different interfering sounds such as sounds of cats and dogs, cars honking, air conditioning, things that you might normally hear in a home.”
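The team did this augmentation physically, playing the recorded clips through the air at varying distances and with interfering sounds present. A purely digital stand-in for the same idea — attenuating a clip to approximate distance and rescaling background noise to a target signal-to-noise ratio — can be sketched as follows. The function name `simulate_playback` and the simple 1/d attenuation model are illustrative assumptions, not the team's method:

```python
import numpy as np

def simulate_playback(clip, noise, distance_m, snr_db):
    """Mix an audio clip with background noise, roughly simulating how it
    might sound from `distance_m` meters away.

    Distance is modeled as simple 1/d amplitude attenuation (a free-field
    approximation), and the noise is rescaled so the mixture has the
    requested signal-to-noise ratio in dB.
    """
    attenuated = clip / max(distance_m, 1.0)
    # Tile the noise so it covers the whole clip, then trim to length.
    reps = int(np.ceil(len(clip) / len(noise)))
    noise = np.tile(noise, reps)[: len(clip)]
    sig_power = np.mean(attenuated ** 2)
    noise_power = np.mean(noise ** 2)
    if noise_power > 0:
        target = sig_power / (10 ** (snr_db / 10))
        noise = noise * np.sqrt(target / noise_power)
    return attenuated + noise
```

Sweeping `distance_m` from 1 to 6 and `snr_db` across a range of noise conditions would generate many training variants from each of the 236 source clips.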

Since the researchers envision their system ultimately being used to not only detect signs of cardiac arrest but also to summon help, Chan and his colleagues needed to minimize the chances of false positives. To that end, they trained their system to distinguish between agonal and non-agonal respiration using 83 hours of audio recordings taken during polysomnographic sleep studies. Those recordings included examples of normal breathing, snoring, hypopnea, and central and obstructive apnea — all conditions that reasonably could be expected to be picked up by a smart speaker placed in a person’s bedroom.

In addition to accounting for practical considerations, the researchers also aimed to protect user privacy. According to Gollakota, the system is intended to be deployed as an app or Alexa skill in which the data is stored locally. “It’s running in real time, so you don’t need to store anything or send anything to the cloud,” he noted.
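The on-device loop described here — classify each audio window locally, keep nothing, and only act when detection is confident — could be sketched as below. The `monitor` function, the `needed` consecutive-window threshold, and the callback interface are all hypothetical; the article does not specify the system's actual alerting logic:

```python
def monitor(windows, classify, needed=2, on_alert=lambda: None):
    """On-device detection loop: audio never leaves the device.

    `windows` yields fixed-length audio buffers from the microphone and
    `classify` is any locally running model. An alert fires only after
    `needed` consecutive positive windows, damping one-off noise spikes.
    """
    streak = 0
    for buf in windows:
        streak = streak + 1 if classify(buf) else 0
        if streak >= needed:
            on_alert()
            streak = 0  # require a fresh run of positives before re-alerting
```

Because each buffer is discarded as soon as it is classified, nothing needs to be stored or sent to the cloud, consistent with the privacy design Gollakota describes.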

The system is currently in the proof-of-concept stage. The next step, Gollakota says, is to obtain more 911 call data from beyond the greater Seattle area in order to further refine the algorithm and ensure that it generalizes across a broader population. The team also notes in its paper that real-world implementation would require the addition of a user interface that provides the option to cancel any false alarm before activating an emergency medical response. The team plans to commercialize its system through Sound Life Sciences, Inc., a UW spinout that is also commercializing Second Chance, a contactless mobile app developed by some of the same researchers that detects signs of an opioid overdose.

Read the journal paper and the UW News release. Also check out coverage of the project by Bloomberg, Digital Trends, GeekWire, IEEE Spectrum, MIT Technology Review, Fast Company, STAT, New Atlas, The Independent, USA Today, and UPI.