As the head of the Allen School’s Social Futures Lab, professor Amy X. Zhang wants to reimagine how social platforms can empower end users and communities to take control of their own online experiences for social good. Her research draws on the design of offline public institutions and communities to develop new social computing systems that help online platforms become more democratic rather than top-down, and more customizable rather than one-size-fits-all.
These efforts were commended by the Alfred P. Sloan Foundation, which recognized Zhang among its 2025 class of Sloan Research Fellows. The fellowship highlights early-career researchers whose innovative work represents the next generation of scientific leaders.
“It’s an honor to be recognized for my work, so that we can further the impact our lab’s research has had across social computing and human-computer interaction (HCI) research communities,” Zhang said. “This fellowship will support my research into redesigning social and collaborative computing systems to give greater agency to users and further societally good aims.”
One of Zhang’s main lines of research focuses on participatory governance in online communities. Taking inspiration from the idea of “laboratories for democracy,” she has developed tools that give users a greater say in the policies governing their online actions. For example, Zhang and her collaborators introduced PolicyKit, an open-source computational framework that enables communities to design and carry out governance procedures such as elections, juries and direct democracy on their platform of choice, such as Slack or Reddit. Their research has prompted follow-up projects including Pika, which enables non-programmers to author a wide range of executable governance policies, and MlsGov, a distributed version of PolicyKit for governing end-to-end encrypted groups.
At the same time, she is also interested in how governance can manifest at the platform level. Because platforms are much larger than individual communities and serve millions of users, maintaining democratic systems becomes a challenge, Zhang explained. To address this, Zhang focuses on workflows supporting procedural justice in the design of platform-level content moderation. She found that digital juries for content moderation were perceived as more procedurally just than existing approaches relying on algorithms and contract workers, though expert panels were perceived as the most procedurally just overall. Because of her expertise, Facebook invited Zhang to give input on its Community Review program, and X (formerly known as Twitter) sought her guidance on its Community Notes program, which lets everyday users weigh in on potential content violations.
Another line of Zhang’s research addresses content moderation from a different angle. Instead of platforms moderating content, Zhang builds tools that help users “decide for themselves how they would like to customize what content they do or do not want to see.” For example, when it comes to online harassment, Zhang found that platform-level solutions do not account for the different ways people define harassment and want to address it. Zhang helped develop FilterBuddy, a system that lets content creators facing harassment build their own filters to moderate their comment sections. She has also introduced other personalized moderation controls, customized social media feed curation algorithms, and content labels for misinformation.
Zhang’s other research interests lie in the space of public interest technology. This includes supporting the many people, often acting in a volunteer capacity, who take on civic roles online such as spreading important information or responding to misinformation. In their research, Zhang and her team found that these people may invest significant time, effort and emotional labor, often without knowing whether they are making a difference. To help support their work, the team developed an augmented large language model that can address misinformation by identifying and explaining inaccuracies in a piece of content. Zhang also helped interview fact-checkers about how they prioritize which claims to verify and what tools might assist them in their work. From this research, Zhang and her collaborators introduced a framework to help fact-checkers prioritize their efforts based on the harm the misinformation could cause.
In her ongoing and future research, Zhang plans to explore how offline institutional structures can also be useful for rethinking the governance of artificial intelligence technologies.
“My long-term goal is to build social computing systems that make our online spaces as rich and varied as our offline ones, while also striving for a more pro-social, resilient and inclusive society,” Zhang said.
In addition to being named a Sloan Research Fellow, Zhang has received Best Paper Awards at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems (ACM CHI) and Conference on Computer-Supported Cooperative Work & Social Computing (ACM CSCW), a National Science Foundation CAREER Award and a Google Research Scholar Award.
Joining Zhang in the 2025 class of Sloan Research Fellows is Allen School alum Lydia Chilton (Ph.D. ‘16), now a faculty member at Columbia University. Her research focuses on HCI and how AI can help people with design, innovation and creative problem-solving.
Zhang is one of three University of Washington faculty to earn Sloan Research Fellowships this year. The other honorees are Amy L. Orsborn, a professor of electrical & computer engineering and bioengineering who earned a fellowship in the neuroscience category, and chemistry professor Dianne J. Xiao.
Read more about Zhang’s research and the Sloan Research Fellowship.