Understanding how different parts of the brain communicate is like trying to follow conversations at a crowded party: many voices overlap, some speakers are far away, and others might be hidden entirely. Neuroscientists face a similar challenge: even when they can record signals from multiple brain regions, it is difficult to figure out who is “talking” to whom and what is being said.
In a recent paper published at the 2025 International Conference on Machine Learning (ICML), a team of researchers led by Allen School professor Matt Golub developed a new machine learning technique to cut through that noise and identify communication between brain regions. The technique, called Multi-Region Latent Factor Analysis via Dynamical Systems (MR-LFADS), uses multi-region neural activity data to decode how different parts of the brain talk to each other — even when some parts can’t be directly observed.
“The many regions within your brain are constantly talking to each other. This communication underlies everything our brains do for us, like sensing our environment, governing our thoughts, and moving our bodies,” said Golub, who directs the Systems Neuroscience & AI Lab (SNAIL) at the University of Washington. “In experiments, we can monitor neural activity within many different brain regions, but these data don’t directly reveal what each region is actually saying — or which other regions are listening. That’s the core challenge we sought to address in this work.”
Unlike existing approaches, MR-LFADS automatically accounts for unobserved brain regions. For example, neuroscientists can use electrodes to simultaneously monitor the activity of large populations of individual neurons across multiple brain regions. However, this activity may be influenced by neurons and brain regions that are not being recorded, explained Belle Liu, a Ph.D. student in the UW Department of Neuroscience and the study’s lead author.
“Imagine trying to understand a conversation when you’re not able to hear one of the key speakers. You’re only hearing part of the story,” Liu said.
To overcome this, the team devised a custom deep learning architecture to detect when a recorded region reflects an unobserved influence and to infer what the unobserved region was likely saying.
“We wanted to make sure the model can’t just pipe in any unobserved signal that you might need to explain the data,” said co-author and Allen School postdoc Jacob Sacks (Ph.D., ‘23). “Instead, we figured out how to encourage the model to infer input from unobserved sources only when it’s very much needed, because that information can’t be found anywhere else in the recorded neural activity.”
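In spirit, that encouragement amounts to a regularization term that makes inferred unobserved input costly to use. The sketch below, in PyTorch, is a minimal illustration of the idea under assumptions of our own (the class names, dimensions, and simple L2 penalty are hypothetical, not the paper’s actual architecture): each recorded region gets its own recurrent generator that receives low-dimensional messages from the other observed regions plus an inferred input standing in for unobserved sources, and the training loss pairs a Poisson reconstruction of the recorded spikes with a penalty on that inferred input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionGenerator(nn.Module):
    """Recurrent generator for one recorded region (illustrative sketch only;
    not the actual MR-LFADS architecture)."""
    def __init__(self, n_neurons, n_factors, n_other_regions, n_inferred, hidden=64):
        super().__init__()
        # inputs: factors broadcast from the other observed regions,
        # plus an inferred input standing in for unobserved sources
        self.rnn = nn.GRUCell(n_factors * n_other_regions + n_inferred, hidden)
        self.to_factors = nn.Linear(hidden, n_factors)
        self.to_log_rates = nn.Linear(n_factors, n_neurons)

    def step(self, h, messages, inferred_input):
        x = torch.cat([messages, inferred_input], dim=-1)
        h = self.rnn(x, h)
        factors = self.to_factors(h)             # low-dim summary other regions can read
        log_rates = self.to_log_rates(factors)   # per-neuron Poisson log-rates
        return h, factors, log_rates

def mr_loss(spikes, log_rates, inferred_inputs, beta=1e-2):
    # reconstruct the recorded spike counts as Poisson observations ...
    recon = F.poisson_nll_loss(log_rates, spikes, log_input=True, reduction="mean")
    # ... while taxing inferred unobserved input, so the model invokes it
    # only when the observed regions cannot account for the data
    penalty = beta * inferred_inputs.pow(2).mean()
    return recon + penalty
```

LFADS-style models typically regularize inferred inputs through a KL divergence against a prior rather than the plain L2 term used here; either way, the effect is that unexplained structure in the data gets attributed to unobserved sources only as a last resort.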
The team tested MR-LFADS using both simulated brain networks and real brain data. First, they designed simulated multi-region brain activity reflecting complicated scenarios for studying brain communication, such as giving each region unique signals from both observed and unobserved sources. The model’s challenge is to recover those signals and to determine where each one originates: from an observed region (and if so, which one) or from an unobserved region. The researchers found that their model inferred this communication more accurately than existing approaches. When applied to real neural recordings, MR-LFADS could even predict how disrupting one brain region would impact another, capturing effects it had never seen before.
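To make the simulated benchmarks concrete, here is a toy version of that kind of multi-region simulation in NumPy, a sketch under assumed parameters rather than the paper’s actual setup: three coupled recurrent regions are simulated, two are treated as “recorded,” and the third’s influence on them is set aside as the ground-truth unobserved signal a model would need to recover.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_regions, dim = 500, 3, 10   # timesteps; region index 2 plays the unobserved role

# random within-region dynamics and cross-region coupling (toy values)
A = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(n_regions)]
C = {(i, j): rng.normal(scale=0.05, size=(dim, dim))   # coupling j -> i
     for i in range(n_regions) for j in range(n_regions) if i != j}

x = np.zeros((n_regions, T, dim))
for t in range(1, T):
    for i in range(n_regions):
        msg = sum(C[i, j] @ x[j, t - 1] for j in range(n_regions) if j != i)
        x[i, t] = np.tanh(A[i] @ x[i, t - 1] + msg
                          + rng.normal(scale=0.05, size=dim))

observed = x[:2]   # what the "experiment" records
# ground-truth influence of the hidden region on each recorded one,
# which a successful model should infer from `observed` alone
hidden_input = np.stack([C[i, 2] @ x[2].T for i in range(2)])
```

A model in the spirit of MR-LFADS would then be trained on `observed` alone and scored on how closely its inferred unobserved input matches `hidden_input`.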
By helping neuroscientists better map brain activity, the model could provide insights into treatments for various brain disorders and injuries. For example, different parts of the brain communicate in certain ways in healthy individuals, but “something about that communication gets out of whack in a diseased state,” explained Golub. Understanding when and how that communication breaks down might enable the design of therapies that intervene in just the right way and at just the right time.
“Models and techniques like these are desperately needed for basic neuroscience to understand how distributed circuits in the brain work,” Golub said. “Neuroscientists are rapidly improving our ability to monitor activity in the brain, and these experiments provide tremendous opportunities for computer scientists and engineers to model and understand the intricate flow of computation in the brain.”
Read the full paper on MR-LFADS here.

