
Allen School launches new stackable Graduate Certificate in Modern AI Methods 

Artificial intelligence existed as both a subfield of computing and a cultural phenomenon long before ChatGPT entered the lexicon in November of 2022. While AI may be decades old, its impact on the way we work, the way we learn and, indeed, the way we live clearly has been accelerating in recent years. What isn’t clear is what comes next; regardless, a growing number of professionals across a range of industries will need the ability to understand, leverage and integrate AI and machine learning as part of their work.

Starting this fall, one option for gaining the necessary knowledge and skills will be the Allen School’s stackable Graduate Certificate in Modern AI Methods, a new part-time evening program designed with the needs of working professionals in mind.

Taylor Kessler Faulkner

“This new curriculum provides students with the opportunity to gain hands-on experience and build their knowledge of best practices when it comes to widely used AI and machine learning methods,” said instructor Taylor Kessler Faulkner. “Professionals in a wide range of fields who are interested in applying AI and ML techniques in their work will benefit from this certificate.”

That curriculum comprises four courses taught by Allen School instructors with deep expertise in the field, addressing topics such as deep learning, computer vision and natural language processing and their applications. The series culminates in a final, project-based course that invites students to put what they’ve learned into practice. 

Although the certificate is geared toward working professionals, the Allen School also welcomes applications from recent graduates who want to develop their knowledge and skills in AI. Unlike many other programs of this type, the Graduate Certificate in Modern AI Methods will be delivered in person on the University of Washington’s main campus — which will provide students with multiple benefits beyond the course content.

“Students in the certificate program will have access to UW facilities and face time with faculty and the other students in their cohort,” noted Allen School professor Luke Zettlemoyer, who is also senior research director at Meta FAIR. “It’s a great opportunity for local professionals and recent graduates without a formal education background in computer science to take graduate-level courses in the Allen School.”

Luke Zettlemoyer

Course content will be available only to students enrolled in the program. The courses are designed to be taken sequentially over twelve months, starting in September. 

For those with ambitions of earning a master’s degree, the stackable certificate in Modern AI Methods can be applied toward either of two stacked master’s degree programs currently offered at the UW: the Master of Science in Artificial Intelligence and Machine Learning for Engineering, and the Master of Engineering in Multidisciplinary Engineering. More stackable degree options may be added in the future.

“AI is having an impact on many professions, both inside and outside of the technology industry, and that impact will continue to grow as new techniques and tools come online,” said Magdalena Balazinska, professor and director of the Allen School and holder of the Bill & Melinda Gates Chair in Computer Science & Engineering. “With the Allen School’s long history of leadership in AI, we embrace our responsibility to help students to acquire the fundamental knowledge and skills that will enable them to leverage the latest advances in their current profession or any new career path they might want to explore.”

While the program is likely to be a good match for individuals with a background in science, technology, engineering or mathematics (STEM) or a mathematically focused business degree, holders of a bachelor’s in any field with the requisite math and programming skills are welcome to apply. However, for those with a degree in computer science or computer engineering, Kessler Faulkner says, the Allen School’s Professional Master’s Program is likely to be a better fit. Applicants can complete an online self-assessment prior to submitting their application to gauge whether their skills are a good match for the certificate program.

The inaugural cohort will start in autumn 2025. The deadline to apply to be part of that cohort is August 1. Learn more about the stackable Graduate Certificate in Modern AI Methods on the Allen School website, and check out a related story in GeekWire. Read more →

Allen School researchers explore how to make online ads more accessible — and less annoying — for screen reader users

A person in a blue shirt on a laptop points at ads popping out of their screen.
(Photo by Kantima Pakdee/Vecteezy)

Even the most well-designed and accessible websites may inadvertently have inaccessible elements — advertisements. Pesky pop-ups or bothersome banner ads may be easy for many people to navigate away from, but for those who use screen readers, ads that are not developed with accessibility in mind can make browsing online a frustrating experience. 

Allen School Ph.D. student Christina Yeung, alongside professors Franziska Roesner and Tadayoshi Kohno, wanted to understand just how problematic inaccessible ads can be for people who rely on screen readers. By auditing how ads use, or do not use, accessible elements and pairing that with interviews with blind participants about their browsing experience, the researchers found that the overall online ad ecosystem is fairly inaccessible for screen reader users. However, encouraging ad platforms to adhere to existing web accessibility guidelines can help make surfing the web a better experience for everyone.

The researchers presented their paper “Analyzing the (In)Accessibility of Online Advertisements” at the 2024 ACM Internet Measurement Conference (IMC) in Madrid, Spain, last November, where it received the Best Paper Award.

“Online ads are everywhere and so pervasive. If you’re browsing on your phone, or even have an ad blocker on your laptop — you will still see ads,” lead author Yeung said. “But because ads are designed with the intent to visually tell you what’s going on, for those who are blind and use screen readers, they can be even more problematic in ways that other people might not think about on a day-to-day basis.”

Yeung and her collaborators analyzed the behavior of over 8,000 ads across 90 different websites based on how well they adhere to Web Content Accessibility Guidelines (WCAG) best practices. Over the course of a month, the team looked at whether the ads disclosed their third-party content status to screen readers as well as their use of HTML assistive attributes such as alt-text and aria-labels. These elements ensure that screen readers can perceive images and other non-text elements on the ad. They also tracked the number of interactive elements each ad had and if there was any missing text associated with links or buttons. For an ad with 15 interactive elements, someone who uses the tab key to maneuver through ads would need to press it 15 times to reach other content on the site. If an ad has a button without associated text, instead of telling the user what it does, the screen reader will just say “button.”
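The kinds of checks described above, missing alt-text and interactive elements with no accessible name, can be sketched as a small audit script. This is a minimal illustration of the sort of attribute inspection the study describes, not the researchers’ actual tooling, and the sample markup is hypothetical.

```python
from html.parser import HTMLParser

class AdAccessibilityAuditor(HTMLParser):
    """Flag two of the issues described above: images without alt text and
    interactive elements (buttons, links) without an accessible name."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self._open_interactive = None  # tag still waiting for a text label
        self._has_text = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not (attrs.get("alt") or "").strip():
            self.issues.append("img missing alt text")
        if tag in ("button", "a"):
            if (attrs.get("aria-label") or "").strip():
                self._open_interactive = None  # aria-label is an accessible name
            else:
                self._open_interactive = tag
                self._has_text = False

    def handle_data(self, data):
        if self._open_interactive and data.strip():
            self._has_text = True

    def handle_endtag(self, tag):
        if tag == self._open_interactive:
            if not self._has_text:
                self.issues.append(f"{tag} with no accessible name")
            self._open_interactive = None

def audit(ad_html):
    auditor = AdAccessibilityAuditor()
    auditor.feed(ad_html)
    return auditor.issues

# A screen reader would announce the unlabeled button below only as "button".
issues = audit('<img src="ad.png"><button></button>')
```

An ad that supplies descriptive alt text and visible button labels would pass both checks.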

The researchers found that the majority of the ads contained inaccessible elements. More than half of the ads had no alt-text at all, or had empty or non-descriptive strings. Many assistive attributes included non-descriptive language such as “ad” or “image.” They also noticed that ad developers were using title attributes to provide information, contrary to WCAG guidelines. Title attributes can provide more context to specific HTML elements, appearing as a tooltip when a user hovers their mouse over the element. However, not all screen readers can consistently interact with them. 

“Inaccessible ads have two primary problems,” Yeung said. “First, people can’t differentiate what the content is, so they can’t even make the decision as to whether or not they want to interact with it. Secondly, ads that are designed poorly really do negatively impact browsing in a way that can be quite cumbersome.”

Yeung then interviewed blind participants who use screen readers to understand just how burdensome these poorly designed ads can be. All of the participants reported that the ads both distracted and detracted from their web browsing experience because they were difficult to navigate away from. Because many ads did not disclose their third-party status, participants often had to use context clues to identify them. For example, if someone on a news site suddenly heard content about furniture, they would know the furniture content was an ad. While the researchers did not evaluate pop-up ads in the study, participants brought up how frustrating those ads are: they are difficult to close, and participants struggled to get back to where they were on the page before the ad appeared.

Only a few large companies dominate the ad landscape, so refining how they adhere to accessibility guidelines can make a noticeable difference. Major ad platforms such as Google, Yahoo and Criteo could create and enforce policies requiring ads to provide meaningful information to screen readers in the HTML attributes. They could also go a step further and develop templates that encourage using assistive attributes and reject ads with generic or missing information, Yeung explained.

“By making some fairly minor changes, we can improve the ecosystem in a way that makes browsing more equitable for everyone,” Yeung said.

Next, Yeung is looking into people’s perceptions of the data collection practices of different generative artificial intelligence companies. 

Read the full paper on ad inaccessibility.

Read more →

‘An incredible driver of economic mobility’: $3M gift from alum Armon Dadgar and Joshua Kalla will support systems research and student success

Allen School alum Armon Dadgar (left) and Joshua Kalla have committed $3 million to Dadgar’s alma mater to create a new professorship and fund programs that support student success.

Ever since he was a student at the University of Washington, Armon Dadgar (B.S., ‘11) has had his head in the cloud. And despite co-founding the high-flying company HashiCorp after graduation, he has kept his feet firmly on the ground by finding ways to parlay his success into support for future innovators and entrepreneurs.

That success grew out of an experience Dadgar and his friend and co-founder, Mitchell Hashimoto (B.S., ‘11), had as undergraduate researchers in the Allen School’s systems research group. It was there that the two gained their first hands-on exposure to cloud computing and the challenges it posed for practitioners. At the time, cloud computing was on the rise, and of today’s three big players — Amazon, Google and Microsoft — only Amazon had officially launched its platform. But Dadgar and Hashimoto had access to all three for the aptly named Seattle Project, which aimed to leverage these emerging platforms for large-scale, peer-to-peer scientific applications. As part of the project, the duo attempted to build a software solution that would span the “multi-cloud” environment they had to work with.

They were unsuccessful on that first attempt, but according to Dadgar, the experience sparked their entrepreneurial spirit. After graduation, they moved to San Francisco and eventually decided to revisit the old research problems that had since emerged on an enterprise scale. They started HashiCorp, which became a leading provider of software for companies and organizations seeking to automate their infrastructure and security management in multi-cloud and hybrid environments. As co-founder and Chief Technology Officer, Dadgar helped grow HashiCorp to over 2,500 employees. The company counted such household names as Expedia and Starbucks among its roughly 5,000 commercial customers prior to its acquisition by IBM for $6.4 billion earlier this year, after going public in 2021.

“Major revolutions in computing, such as the public cloud, have depended on crucial research innovation in computer systems. As an undergrad in the Allen School, I was fortunate to have been exposed to research in operating systems, virtualization, networking, and more which underpins the public cloud,” said Dadgar. “Those experiences ultimately led to me founding HashiCorp. By supporting systems research, I hope for the Allen School to continue to be at the forefront of innovation in AI and beyond to inspire the next generation of students, researchers and entrepreneurs.”

Inspiring the next generation: Dadgar with UW students at a DubHacks event.

Dadgar may have traded the city by the sound for the city by the bay years ago, but his affection for the UW is evergreen. Now, he and his partner, Joshua Kalla, are living in Seattle and hoping to sow the seeds of the next HashiCorp through a $3 million gift to the Allen School to support research and student success — and drive the next wave of systems innovation for the artificial intelligence era. The couple’s commitment includes $1 million to establish the Armon Dadgar & Joshua Kalla Endowed Professorship in Computer Science & Engineering, with the intent to help propel Seattle and Dadgar’s alma mater from the epicenter of cloud computing to the leading edge at the intersection of systems and AI.

“We are incredibly grateful to Armon and Josh for their generosity,” said Magdalena Balazinska, director of the Allen School and Bill & Melinda Gates Chair in Computer Science & Engineering. “The Allen School is one of the top computer science programs in the country, and an academic leader in cloud computing, systems, and AI research. But to maintain that leadership and continue to make transformational advances while educating the next generation of innovators, we need support to attract and retain the most talented faculty and students. Armon’s and Josh’s gift will greatly help us with that.”

While Dadgar is eager to give next-generation systems research a lift, he is even more enthusiastic about elevating the next generation of students entering the field. To that end, he and Kalla have committed $2 million to the Allen School Student Success Fund to support a variety of initiatives aimed at prospective and current Allen School students, with a focus on first-generation college students and K-12 students in Washington with limited access to computing education resources.

“Education has always been an incredible driver of economic mobility”: Dadgar and Kalla with scholars in the UW’s Educational Opportunity Program.

“Education has always been an incredible driver of economic mobility,” said Dadgar. “Our goal is to broaden the pathways into computer science and technology, and particularly to focus on first-generation college students where we can have a multi-generational impact on both the individual and their families.”

Dadgar has repeatedly walked the talk, whether on campus or at company headquarters. At HashiCorp, he championed the creation of the Early Career Program in 2021 to enable college students of all majors and backgrounds to spend a summer at HashiCorp applying what they’ve learned in the classroom in a real-world corporate setting. More than 170 interns from across the country have benefited from the program’s mentorship and networking opportunities — over a third of whom accepted full-time positions with the company after graduation. In 2019, Dadgar and Kalla committed $3.6 million to the UW to provide scholarships to undergraduate students who participate in the university’s Educational Opportunity Program, which has supported 35 scholars to date.

As a professor at Yale University, Kalla is well aware of the impact such programs can have on students — and the institutions that provide them with that pathway to economic mobility.

“The University environment is a unique setting where students are exposed to new ideas, learn valuable skills, and through research advance the frontiers of knowledge,” said Kalla. “Creating opportunities for the next generation to participate and ultimately to lead us forward is incredibly important to us personally.”

Among the programs supported by the Student Success Fund are the Allen School Scholars Program, a one-year cohort-based program for incoming computer science and computer engineering majors focused on emerging leaders from first generation, low-income and underserved communities, and Changemakers in Computing, a summer program for rising juniors and seniors in high school to learn about computing and its societal impacts.

“We’re hugely appreciative of Armon and Josh’s extraordinary generosity, which will have a lasting impact on our program and our students,” said Ed Lazowska, professor and the Bill & Melinda Gates Chair Emeritus at the Allen School. “This gift is an opportunity to reflect on the inspirational story of HashiCorp: best friends pursuing a vision that began with some software that they built as part of an undergraduate project in the Allen School — and it will enable future generations of Allen School students to pursue their dreams.”

Read a related GeekWire story.
Read more →

‘Advancing the HCI community in Seattle and across the globe’: Allen School professor James Fogarty inducted into SIGCHI Academy Class of 2025

James Fogarty

Throughout his career, professor James Fogarty, who joined the Allen School faculty in 2006, has become a central figure in Seattle’s human-computer interaction (HCI) community and beyond. His research has made key contributions in sensor-based interactions, interactive machine learning, personal health informatics and accessibility, spanning more than 100 peer-reviewed papers. At the same time, he has played a pivotal role in founding and growing Design, Use, Build (DUB) — the University of Washington’s cross-campus HCI alliance bringing together faculty, students, researchers and industry partners.

The ACM Special Interest Group on Computer-Human Interaction (SIGCHI) recognized Fogarty’s contributions and inducted him into the SIGCHI Academy Class of 2025. Each class represents the principal leaders of the field, whose research has helped shape how we think of HCI.

“I am honored to be among the SIGCHI Academy Class of 2025,” Fogarty said. “I’m grateful for the amazing students and collaborators that I’ve had the pleasure to work with over the years, advancing HCI, interactive machine learning, personal health informatics and accessibility research.”

Since the beginning of his career, Fogarty has made breakthroughs in HCI research. As a first-generation student, he was introduced to HCI research at Virginia Tech and was part of the first cohort of Ph.D. students in the Human-Computer Interaction Institute at Carnegie Mellon University. Key research from his dissertation, which focused on using sensor-based interactions to predict the best time to interrupt someone, received a 2005 CHI Best Paper Award. 

Upon joining the UW, Fogarty launched a new research emphasis on interaction with artificial intelligence (AI) and machine learning. His research into new methods for engaging end users in machine learning training and assessment, and into the difficulties that machine learning developers encounter, was considered ahead of its time. Fogarty and his collaborators contributed to what is now known as human-AI interaction before it became a trending topic, and this line of research went on to directly inform industry guidelines for the field.

In the same period, Fogarty and his collaborators developed Prefab, a system for real-time interpretation and enhancement of graphical interfaces through reverse engineering their pixel-level appearance. Prefab, which earned a 2010 CHI Best Paper Award, was a breakthrough in interface systems research, foreshadowing current work using AI to understand, interact with and enhance graphical interfaces. 

Fogarty then turned his research interests toward digital health, including tools to support people in self-tracking and making sense of health data. He and his collaborators provided new insights into the design of tools to support menstrual tracking, informing the design of Apple’s menstrual cycle tracking support. The research received a 2017 CHI Best Paper Award and helped expand the field’s conception of personal informatics to account for the role of design in experiences people have with their tools. Fogarty also researched the design of mobile food journals and activity-tracking visualizations as well as tools to help patients collaborate with their health providers to interpret and act on self-tracked data. More recently, Fogarty and his collaborators developed a new goal-directed approach to long-term migraine tracking, which earned them a 2024 CHI Best Paper Award, and a new tool for home-based self-monitoring of cognitive impairment in patients with liver disease, recognized with a 2025 CHI Best Paper Award. 

Outside of health tracking, Fogarty has also made important strides in accessibility research. He and his team drew inspiration from epidemiology to conduct the first large-scale assessment of accessibility in 10,000 Android apps. The Department of Justice cited the work as part of its updates to the Americans with Disabilities Act. He also extended his work on interface understanding and enhancement to demonstrate real-time repair of mobile app accessibility failures. This research helped directly motivate and inform Apple’s launch of accessibility repair in its pixel-based Screen Recognition.

“James has made an exemplary impact across research disciplines and industry,” Allen School professor Jeffrey Heer said. “His research prowess, volunteer spirit, deep care, thoughtfulness and community-mindedness have helped guide DUB and advance the HCI community in Seattle and across the globe.”

Fogarty is one of five UW faculty being recognized with ACM SIGCHI Awards this year. Department of Human Centered Design & Engineering (HCDE) professor and Allen School adjunct faculty member Kate Starbird joins Fogarty in the SIGCHI Academy Class of 2025; her work sits at the intersection of HCI and computer-supported cooperative work.

Information School professor and Allen School adjunct faculty member Alexis Hiniker also won an ACM SIGCHI Societal Impact Award for her research into ways that consumer-facing technologies can hurt young people instead of helping them thrive. Nadya Peek and Cecilia Aragon, both HCDE professors and Allen School adjunct faculty members, were honored with ACM SIGCHI Special Recognitions. Peek was recognized for “democratizing automation through open-source hardware, building global maker communities and bridging academic research with grassroots fabrication practices,” and Aragon “for establishing human-centered data science as a new field bridging HCI and data science, demonstrating its impact through applications from astrophysics to energy systems.”

Read more about the 2025 ACM SIGCHI Awards, and see more about DUB and the UW’s presence at CHI 2025.

Read more →

From ‘worst case’ to ‘best paper’: Allen School Ph.D. student Kyle Deeds recognized at ICDT for improving data query executions

One of the foundations of database theory is efficient query execution. On the theory side, researchers have tackled the problem by deriving upper bounds on the size of query results to determine the fundamental hardness of query execution. Other researchers have taken a practical approach, honing query optimizers that automatically create query evaluation plans based on various data properties. These two approaches have converged in the form of worst-case optimal join algorithms, which guarantee a running time matching the worst-case size of the query result by taking into account certain statistics about the queried dataset.

In a paper titled “Partition Constraints for Conjunctive Queries: Bounds and Worst-Case Optimal Joins,” Allen School Ph.D. student Kyle Deeds introduced an innovative statistic called partition constraints that can improve existing worst-case optimal join algorithms by capturing the latent structure in relations within the data. The rigid nature of traditional constraints used in worst-case optimal join algorithms, such as degree constraints or cardinality constraints, can end up hindering query execution speed. Instead, partition constraints extend the notion of degree constraints, enabling the relationships between database attributes to be divided into smaller and more manageable sub-relations that each have their own degree constraints.

Deeds and his collaborator Timo Camillo Merkl of TU Wien presented their research at the 28th International Conference on Database Theory (ICDT) in Barcelona, Spain, last month where their work received the Best Student Paper and Best Paper Awards.

“A core problem of database theory is query execution,” said Deeds, who is advised by professors Dan Suciu and Magdalena Balazinska in the University of Washington’s Database Group. “What this work did was present a new kind of statistic, called partition constraints, on that data that we can take into account when we are executing a query, essentially speeding up the whole process for data that has this nice structure.”

As an example, consider a database that records who has key card access to which rooms at a university. Faculty and students may only need to enter a few rooms, such as lecture halls and offices, while security and cleaning staff may need to reach many more, or perhaps all, rooms. It then makes sense to partition key card clearance, tracking the access of faculty and students separately from that of the various room caretakers, to efficiently determine who needs access to which rooms.
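The key card intuition can be sketched as a simple degree-based split of a relation. This is a hypothetical illustration of the partitioning idea, not the paper’s formal definition of partition constraints; the names and threshold are invented for the example.

```python
from collections import defaultdict

def partition_by_degree(relation, threshold):
    """Split a binary relation (here, (person, room) key-card pairs) into a
    'light' fragment, where every person opens at most `threshold` rooms,
    and a 'heavy' fragment for the remaining high-degree people.
    Each fragment then satisfies its own, much tighter degree constraint."""
    by_person = defaultdict(list)
    for person, room in relation:
        by_person[person].append(room)
    light, heavy = [], []
    for person, rooms in by_person.items():
        target = light if len(rooms) <= threshold else heavy
        target.extend((person, room) for room in rooms)
    return light, heavy

# Hypothetical key-card relation: students open few rooms, staff open many.
access = [("alice", "office"), ("alice", "lab"),
          ("janitor", "office"), ("janitor", "lab"),
          ("janitor", "lobby"), ("janitor", "server room")]
light, heavy = partition_by_degree(access, threshold=2)
```

A join can then be evaluated separately on each fragment, with the tighter per-fragment degree bound informing the algorithm’s cost estimate.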

Deeds took inspiration for partition constraints from graph theory, the study of networks. Degeneracy in graph theory measures how sparse a graph is. In a network with low degeneracy, researchers can design algorithms that minimize the performance impact of highly connected vertices, which can make some graph algorithms more efficient. When it comes to partitioning and databases, this property can also help maintain performance even in cases where the tables have very high degrees. The researchers developed partition constraints as a declarative version of graph degeneracy for higher-arity data, or data with more attributes or fields.
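The graph-theoretic notion the work builds on can be computed directly: the degeneracy of a graph is the largest minimum degree encountered while repeatedly deleting a minimum-degree vertex. A small sketch, with illustrative graphs:

```python
def degeneracy(adj):
    """Degeneracy of an undirected graph given as {vertex: set(neighbors)}:
    repeatedly delete a minimum-degree vertex; the answer is the largest
    minimum degree seen along the way."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # defensive copy
    best = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        best = max(best, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best

# A star graph is sparse in this sense: one hub connected to many leaves
# has degeneracy 1, even though the hub itself has high degree.
star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
```

This is the property the article alludes to: a few high-degree vertices (or, in the database setting, a few high-degree tuples) do not by themselves make the whole structure dense.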

“This paper is like a translation style work,” Deeds said. “The goal of this work was to translate ideas from graph theory over in a way that made sense to the database community. Once we found the right way to do the translation, things just clicked into place.”

These ideas started clicking into place when Deeds and Merkl met and began collaborating on their research.

“Kyle and Camillo first met at a previous instance of the same conference, at ICDT in 2023 in Greece. Since then they worked remotely, without any help or guidance from their respective advisors,” said Suciu, who holds the Microsoft Endowed Professorship in the Allen School. “They started from an existing, elegant concept in graph theory, called ‘degeneracy,’ and extended it to a practical method for processing relational databases. Their main idea is that, by carefully partitioning each relation into a small number of fragments, then computing a query separately on each fragment, one can significantly reduce the query’s execution time.”

For Deeds, partition constraints point to a new direction for database theory research to pursue. Partition constraints help uncover underlying and useful structures within data, and understanding the different correlations and properties that a database has can help find ways to further optimize query executions.

“The same partitioning method that Kyle and Camillo developed has many other applications beyond query evaluation. For example, in one of my research projects we are developing improved techniques for cardinality estimation, and asked Kyle to join us to adapt his method for this problem,” Suciu said. 

Outside of partition constraints, Deeds has been researching ways to optimize other programs. He introduced Galley, a system for declarative sparse tensor computing that enables users to draft efficient tensor programs without needing to make complex algorithm decisions. 

Read the full paper on partition constraints. 

Read more →

‘Pushing the field to the next stage’: Allen School professor Su-In Lee recognized as a Fellow of the International Society for Computational Biology

Su-In Lee looking off to the side with a slight smile on her face. She is holding a pen in one hand and an Allen School coffee mug in the other, seated behind an open laptop against the backdrop of a whiteboard with equations scribbled across it.
Mark Stone/University of Washington

Allen School professor Su-In Lee, who directs the University of Washington’s AI for bioMedical Sciences (AIMS) Lab, is shaping the future of biology and medicine through artificial intelligence. Her research focuses on fundamentally advancing AI and machine learning (ML) techniques to provide insights into complex biological systems and drive healthcare breakthroughs, transforming fields from basic biology to clinical medicine, including cancer biology, dermatology and critical care.

Lee’s groundbreaking work has earned her a long list of accolades. Most recently, the International Society for Computational Biology (ISCB) inducted her into its 2025 Class of Fellows. These fellows lead the field with their innovative research and service, reflecting “a career of significant impact and a dedication to the scientific community.”

“I am really grateful and honored to be named an ISCB Fellow,” said Lee, who holds the Paul G. Allen Professorship in Computer Science & Engineering. “This recognition fuels my desire to contribute further and push the field to the next stage.”

Lee’s research on using generative AI and physician expertise to audit medical-image classifiers was recently featured on the cover of Nature Biomedical Engineering.

One of Lee’s most pivotal contributions to the field is her work on SHAP (SHapley Additive exPlanations) values. The technique uses a game theory approach to help explain the output of an ML model. Her research using SHAP techniques tackles the trade-off between accuracy and interpretability: simpler models tend to be more interpretable at the expense of accuracy, while complex models are more accurate but difficult to interpret. Lee and her collaborators balance the two to develop high-performance, expressive models that are effective for biomedicine.
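SHAP itself relies on efficient approximations, but the game-theoretic idea behind it can be illustrated by computing exact Shapley values by brute force on a tiny model. The feature names and model below are hypothetical, not drawn from Lee’s work.

```python
from itertools import permutations
from math import factorial

def shapley_values(model, features, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings in which features are revealed to the model.
    Features not yet revealed are held at their baseline value."""
    names = list(features)
    phi = {name: 0.0 for name in names}
    for order in permutations(names):
        inputs = dict(baseline)
        prev = model(inputs)
        for name in order:
            inputs[name] = features[name]
            curr = model(inputs)
            phi[name] += curr - prev
            prev = curr
    n_orderings = factorial(len(names))
    return {name: total / n_orderings for name, total in phi.items()}

# Hypothetical additive risk model; for additive models the Shapley value
# of each feature equals that feature's own contribution.
model = lambda x: 2 * x["age"] + 3 * x["bmi"]
phi = shapley_values(model, {"age": 1.0, "bmi": 1.0}, {"age": 0.0, "bmi": 0.0})
```

The brute-force version scales factorially in the number of features, which is precisely why SHAP’s approximation methods matter for real models.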

Using SHAP values as a foundation, Lee and her collaborators developed a novel framework called Prescience that could predict, and also explain, a patient’s risk of developing hypoxemia during surgery. Their follow-up research, CoAI (Cost-Aware Artificial Intelligence), used SHAP values to reduce the time, effort and resources needed to predict patient outcomes and inform treatment plans. Lee has also used SHAP values to understand factors influencing aging, via the ENABL Age (ExplaiNAble BioLogical Age) framework, to better treat age-related disorders.

Lee has also developed explainable AI methods that can analyze biological data and find new therapeutic targets for diseases. She and her colleagues introduced ContrastiveVI, a deep learning framework that applies contrastive analysis to single-cell datasets, helping separate variation in the target cells from that in healthy control cells during experiments. For Lee, explainable AI also has the potential to identify novel treatments for Alzheimer’s disease, one of the 10 leading causes of death in the U.S., and other neurodegenerative conditions. Lee has likewise used explainable AI to audit and dissect the reasoning process that AI systems use to analyze medical images, such as chest X-rays for predicting COVID-19 status or dermatological images for detecting melanoma.

“Su-In’s body of work has completely transformed how people analyze and interpret AI/ML models. Her SHAP approach is used by essentially the entire field, not only in computational biology but all of computer science,” said Trey Ideker, professor of medicine, bioengineering and computer science at the University of California San Diego.

In addition to her research, Lee has helped promote interdisciplinary education and collaboration. Since 2020, she has served as director of UW’s Computational Molecular Biology program. She also played a pivotal role in establishing a collaboration between the Medical Scientist Training Program and the Allen School to train Ph.D. students in both computer science and clinical medicine, and has taken her leadership skills beyond UW as co-founder and co-chair of the Machine Learning in Computational Biology conference.

Lee’s fellowship follows on the heels of another ISCB recognition. For her contributions to AI and biomedicine, she received the prestigious 2024 ISCB Innovator Award, which honors leading mid-career computational biologists. Last year, she became the first woman to win the Samsung Ho-Am Prize in Engineering, often referred to as the “Korean Nobel Prize,” and was also named a Fellow of the American Institute for Medical and Biological Engineering.

Having transitioned from studying computer science to biology herself, Lee understands the challenges Allen School students may face in researching biomedical applications, but also the rewards.

“It would be great for the field, the world and science in general if more computer science students pursue computational biology,” Lee said. 

Read more about this year’s ISCB Fellows and Lee’s research.


Designing hidden Mona Lisas: Allen School researchers use computation to weave new capabilities for illusion knitting 

Examples of computational illusion knit design including a gray slate that transitions to the Mona Lisa, and an image of Vincent Van Gogh’s sunflowers shifting into his self portrait.

When you look at a knitted panel head-on, all you might see is gray noise, but taking a step to the side and looking at an angle reveals another image — a hidden Mona Lisa. 

This technique is called illusion knitting, where stitches of varying heights create different images depending on the angle from which you view the piece. Traditionally, the intricate process of designing these illusions (selecting stitches, choosing their colors and deciding whether or not they are raised) was done by hand. Now, a team of Allen School researchers has introduced computational illusion knitting. The design framework helps automate the process by computationally generating knitting patterns for input designs, making illusion knitting more accessible and allowing for more complex, multi-view patterns that were previously impossible. The researchers presented the paper at the SIGGRAPH 2024 conference.

Headshot of Allen School Ph.D. student Amy Zhu
Allen School Ph.D. student Amy Zhu

“If we write down the mathematical properties of the illusion knits, then we have a foundation with which to create algorithms that help the user iterate on and optimize their designs,” lead author and Allen School Ph.D. student Amy Zhu said. “With this foundation you can start to think, ‘where can I push the boundaries of what’s possible with illusion knitting?’”

To create an illusion knit object, the researchers first characterize the design’s unique microgeometry, or the small-scale geometry on the surface of a knitted piece. The goal is then to add a layer of abstraction over the complex microgeometry to streamline the design process, Zhu explained. This layer uses logical units: bumps, or raised sections made by a knit stitch with a purl in the row below that can block previous stitches, and flats, where the knit surface is level, made by a knit stitch with another knit in the row below. 

These observations led the researchers to develop constraints that capture both the viewing behavior, or what image is seen at each angle, and the physical behavior, or the result of the knitted design and fabrication choices. Single-image illusion knits, such as the hidden Mona Lisa, are easier to design as they only change the color between rows.

Using computational techniques, the team became the first to design and execute illusion knitting patterns that require mixed colorwork and texture between rows. The researchers first used automated methods such as gradient descent and maximum satisfiability (MaxSAT) solving to find a compromise that satisfies as many constraints as possible while relaxing others, Zhu explained. However, these systems are still limited by how well they can express constraints about readability and knittability, as well as by how accurately the theoretical model matches reality.
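The compromise-finding idea can be illustrated with a toy weighted-MaxSAT search over boolean stitch choices. The stitch variables, constraints and weights below are invented for illustration and are not from the paper; a real solver would search far more cleverly than this brute-force enumeration:

```python
from itertools import product

# Each boolean decides whether a stitch is raised (a "bump") or level
# (a "flat"). Hard constraints must hold; soft constraints carry
# weights, and the search keeps the assignment satisfying the most
# total weight, relaxing whichever soft constraints it must.
STITCHES = ["s0", "s1", "s2", "s3"]

hard = [
    lambda a: a["s0"] != a["s1"],        # hypothetical: adjacent stitches must differ
]
soft = [
    (3.0, lambda a: a["s2"]),            # hypothetical: readability wants a bump here
    (2.0, lambda a: a["s2"] == a["s3"]), # hypothetical: knittability prefers a run
    (1.0, lambda a: not a["s0"]),
]

best, best_score = None, -1.0
for bits in product([False, True], repeat=len(STITCHES)):
    a = dict(zip(STITCHES, bits))
    if not all(c(a) for c in hard):      # hard constraints filter assignments
        continue
    score = sum(w for w, c in soft if c(a))
    if score > best_score:
        best, best_score = a, score
# best now holds the highest-weight feasible assignment.
```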

A knit image of Edvard Munch’s “The Scream” quantized using a diffusion model on the left, and one quantized by hand on the right.

Instead of relying solely on the algorithm for solutions, the researchers created a user-in-the-loop framework that allows the user to easily edit the design. The user can then further simplify the design by breaking it down into tiles made of bumps and flats to fill the image.

“There’s many tricky things to consider when you’re in fabrication land that are often difficult to anticipate or capture when working theoretically — maybe the machine settings are wrong and the white stitches are smaller, making the contrast between colors lower,” Zhu said. “One reason I like our approach is that we put the fabricated reality first, so our priority is providing ways for the user to edit things that come out of the machine, so they can observe if something didn’t look right and know how to fix it.”

For example, using a knitting machine and this design framework, Zhu created a knit piece that shows artist Vincent Van Gogh’s self portrait from one angle and his sunflowers from another. The quantized input image of the self portrait had too much noise around his head, which made the design unreadable when knit. For a cleaner result, all she had to do was edit the input image to remove the noise or modify the tiles.

Zhu said she hopes this framework gives users the foundation to create more innovative illusion knit designs such as clothes that display images with certain poses or knit animations. Next, Zhu is researching ways to capture and explore different fabrication plans of knit objects.

“Approaching crafts from a computer science angle allows us to expand the boundaries of what can be achieved — whether that’s developing new algorithms for machine knitting, creating novel design tools or even inventing new forms of visual art,” said senior author and Allen School professor Zach Tatlock, who co-advises Zhu alongside colleague Adriana Schulz. “It’s fascinating to see how the rigor of computer science can enhance the creativity of traditional crafts.”

Additional authors include Yuxuan Mei and Benjamin Jones, Allen School Ph.D. students in the Graphics and Imaging Laboratory (GRAIL), along with Schulz.

Read the full paper on computational illusion knitting. 


Allen School’s Shangbin Feng and Rock Yuren Pang earn IBM Ph.D. Fellowships for seeking to change the narrative around AI

Allen School Ph.D. student Shangbin Feng envisions the work of large language models (LLMs) as a collaborative endeavor, while fellow student Rock Yuren Pang is interested in advancing the conversation around unintended consequences of these and other emerging technologies. Both were recently honored among the 2024 class of IBM Ph.D. Fellows, which recognizes and supports students from around the world who pursue pioneering research in the company’s focus areas. For Feng and Pang, receiving a fellowship is a welcome validation of their efforts to challenge the status quo and change the narrative around AI.

Shangbin Feng: Championing AI development through multi-LLM collaboration

Headshot of Allen School Ph.D. student Shangbin Feng
Shangbin Feng

For Shangbin Feng, the idea of a singular general-purpose LLM is hard to accept, in the same way there are no “general purpose” individuals. Instead, people have varied and specialized skills and experiences, and they collaborate with each other to achieve more than they could on their own. Feng brings this idea to his research on multi-LLM collaboration, helping to develop a range of protocols that enable information exchange across LLMs with diverse expertise.

“I’m super grateful for IBM’s support to advance multi-LLM collaboration, challenging the status quo with a collaborative and participatory vision,” Feng said. “As academic researchers with limited resources, we are not powerless: we can put forward bold proposals to enable the collaboration of many in AI development, not just the resourceful few, such that multiple AI stakeholders can participate and have a say in the development of LLMs.”

Feng’s research focuses on developing different methods for LLM collaboration. He introduced Knowledge Card, a modular framework that helps fill information gaps in general-purpose LLMs by augmenting them with a pool of domain-specialized small language models that provide relevant, up-to-date knowledge. Feng and his collaborators then proposed a text-based approach in which multiple LLMs evaluate and provide feedback on each other’s responses to identify knowledge gaps and improve reliability. Their research received an Outstanding Paper Award at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024). 

He also helped develop Modular Pluralism, a framework that aggregates community language models representing the preferences of diverse populations at the token-probability level. Most recently, Feng proposed Model Swarms, a collaborative search algorithm in which diverse LLM experts collectively move through the parameter search space using swarm intelligence.

Outside of multi-LLM collaboration, Feng worked alongside his Ph.D. advisor, Allen School professor Yulia Tsvetkov, to develop a framework that evaluates pretrained natural language processing (NLP) models for their political leanings in an effort to combat these biases. Their work received the Best Paper Award at ACL 2023.

In future research, Feng plans to focus on ways to reprocess and recycle the more than one million publicly available LLMs.

“It is often costly to retrain or substantially modify LLMs, thus reusing and composing existing LLMs would go a long way to reduce the carbon footprint and environmental impact of language technologies,” Feng said.

Rock Yuren Pang: Uncovering unintended consequences from emerging technologies

Headshot of Allen School Ph.D. student Rock Yuren Pang
Rock Yuren Pang

As AI becomes more commonplace and integrated into different sectors of society, Rock Yuren Pang wants to help researchers and practitioners grasp the potential adverse effects of these emerging technologies.

“I work in the intersection of human-computer interaction (HCI) and responsible AI,” Pang said. “Through the IBM fellowship, I’ll continue to design systems and sociotechnical approaches for researchers to anticipate and understand the unintended consequences of our own research products, especially with fast-growing AI advancements. I’d like to change the narrative that doing so is a burden, but rather a fun and rewarding experience to communicate potential risks as well as the benefits for many diverse user populations.”

While tracking the downstream impacts of AI and other technologies can be overwhelming, Pang has worked with his Ph.D. advisor, Allen School professor Katharina Reinecke, to introduce tools to help make it more manageable. In a white paper dubbed PEACE, or “Proactively Exploring and Addressing Consequences and Ethics,” Pang and his collaborators propose a holistic approach that both makes it easier for researchers to access resources and support to predict and mitigate unintended consequences of their work, and intertwines these concerns with the school’s teaching and research. Their work is supported through a grant from the National Science Foundation’s Ethical and Responsible Research (ER2) program. 

He also helped develop Blip, a system that consolidates real-world examples of undesirable impacts of technology from across online articles. The system then summarizes and presents the information in a web-based interface, assisting researchers in identifying consequences that they “had never considered before,” Pang explained. Most recently, Pang has been investigating the growing influence of LLMs on HCI research.

Outside of his work in responsible AI, Pang is also interested in accessibility research. As part of the Center for Research and Education on Accessible Technology and Experiences (CREATE), he and his team developed an accessibility guide for data science and other STEM classes. Their work was honored last year as part of the UW IT Accessibility Task Force’s Digital Accessibility Awards. Pang also contributed to AltGeoViz, a system that allows screen-reader users to explore geovisualizations by automatically generating alt-text descriptions based on their current map view. The team received the People’s Choice Award at the Allen School’s 2024 Research Showcase.

In the future, Pang aims to design more detailed guidelines for addressing potential unintended consequences from AI, and novel interactions to collaborate with AI.

Read more about the IBM Ph.D. Fellowships.


Allen School professor Amy X. Zhang receives Sloan Research Fellowship for empowering users to make ‘our online spaces as rich and varied as our offline ones’

Portrait of Allen School professor Amy X. Zhang
Allen School professor Amy X. Zhang (Photo by University of Washington)

As the head of the Allen School’s Social Futures Lab, professor Amy X. Zhang wants to reimagine how social platforms can empower end users and communities to take control of their own online experiences for social good. Her research draws on the design of offline public institutions and communities to then develop new social computing systems that can help online platforms become more democratic instead of top-down, and more customizable as opposed to one-size-fits-all.

These efforts were commended by the Alfred P. Sloan Foundation, which recognized Zhang among its 2025 class of Sloan Research Fellows. The fellowship highlights early-career researchers whose innovative work represents the next generation of scientific leaders.

“It’s an honor to be recognized for my work, so that we can further the impact our lab’s research has had across social computing and human-computer interaction (HCI) research communities,” Zhang said. “This fellowship will support my research into redesigning social and collaborative computing systems to give greater agency to users and further societally good aims.”

One of Zhang’s main lines of research focuses on participatory governance in online communities. Taking inspiration from the idea of “laboratories for democracy,” she has developed tools to help users have a greater say in the policies governing their online actions. For example, Zhang and her collaborators introduced PolicyKit, an open-source computational framework that enables communities to design and carry out procedures such as elections, juries and direct democracy on their platform of choice, such as Slack or Reddit. Their research has prompted follow-up projects including Pika, which enables non-programmers to author a wide range of executable governance policies, and MlsGov, a distributed version of PolicyKit for governing end-to-end encrypted groups.

At the same time, she is also interested in how governance can manifest at the platform level. Because platforms are much larger than individual communities, with millions of users, maintaining democratic systems becomes a challenge, Zhang explained. To address this, Zhang focuses on workflows supporting procedural justice in platform-level content moderation design. She found that digital juries for content moderation were perceived as more procedurally just than existing algorithms and contract workers, though expert panels were perceived as most just overall. Because of her expertise, Zhang was invited by Facebook to give input on its Community Review program and by X (formerly known as Twitter) to advise on its Community Notes program, which lets everyday users weigh in on potential content violations.

Another line of Zhang’s research addresses the issue of content moderation, but from a different angle. Instead of platforms moderating content, Zhang builds tools to help users “decide for themselves how they would like to customize what content they do or do not want to see.” For example, when it comes to online harassment, Zhang found that platform-level solutions do not account for the different ways that people can define and want to address harassment. Zhang helped develop FilterBuddy, a system that helps content creators who face harassment create their own filters to moderate their comment sections. She has also introduced other personalized moderation controls, customized social media feed curation algorithms and content labels for misinformation.

Zhang’s other research interests lie in the space of public interest technology. This includes supporting the many people, often acting in a volunteer capacity, who take on civic roles online such as spreading important information or responding to misinformation. In their research, Zhang and her team found that these people may spend significant amounts of time, effort and emotional labor often without knowing if they are making a difference. To help support their work, the team developed an augmented large language model that can address misinformation by identifying and explaining inaccuracies in a piece of content. Zhang also helped interview fact-checkers on how they prioritize which claims to verify and what tools may assist them in their work. From this research, Zhang and her collaborators introduced a framework to help fact-checkers prioritize their efforts based on the harm the misinformation could cause.

In her ongoing and future research, Zhang plans to explore how offline institutional structures can also be useful for rethinking the governance of artificial intelligence technologies.

“My long-term goal is to build social computing systems that make our online spaces as rich and varied as our offline ones, while also striving for a more pro-social, resilient and inclusive society,” Zhang said.

Zhang, in addition to being named a Sloan Research Fellow, has received Best Paper Awards at the Association for Computing Machinery’s CHI conference on Human Factors in Computing Systems (ACM CHI) and ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing (ACM CSCW), a National Science Foundation CAREER Award and a Google Research Scholar Award.

Joining Zhang in the 2025 class of Sloan Research Fellows is Allen School alum Lydia Chilton (Ph.D. ‘16), now faculty at Columbia University. Her research focuses on HCI and how AI can help people with design, innovation and creative problem-solving.

Zhang is one of three University of Washington faculty to earn Sloan Research Fellowships this year. The other honorees are Amy L. Orsborn, a professor of electrical & computer engineering and bioengineering who earned a fellowship in the neuroscience category, and chemistry professor Dianne J. Xiao.

Read more about Zhang’s research and the Sloan Research Fellowship.


Allen School professors Brian Curless and Jeffrey Heer named ACM Fellows for helping transform how we use computing technologies

Statue of the University of Washington "W"
Photo by Dennis Wise/University of Washington

Each year, the Association for Computing Machinery (ACM) recognizes the top 1 percent of its members who have made notable contributions to the field of computing science and technology as ACM Fellows.

Allen School professors Brian Curless and Jeffrey Heer were among the 55 ACM Fellows who will be recognized at a ceremony in June in San Francisco, California. The 2024 inductees’ contributions range from 3D scanning to interactive machine learning and much more, and their work has helped transform how we use computing technologies today.

Brian Curless: “For contributions to 3D shape and appearance capture and reconstruction and to computational photography.”

Portrait of Brian Curless
Brian Curless

Even before joining the Allen School faculty in 1998, professor Brian Curless was at the forefront of research in computer graphics and vision. He contributed six of the 88 papers featured in ACM SIGGRAPH’s Seminal Graphics Papers: Volume 2, a 2023 collection of pioneering papers marking SIGGRAPH’s 50th anniversary. He co-directs the UW Reality Lab alongside colleagues Steve Seitz and Ira Kemelmacher-Shlizerman, and is also a part of the Allen School’s Graphics and Imaging Laboratory (GRAIL). 

The ACM commended Curless for his “contributions to 3D shape and appearance capture and reconstruction and to computational photography.”

“It is an incredible honor to be named an ACM Fellow,” Curless said. “This recognition is as much mine as it is the amazing collaborators, both at the Allen School and beyond, that I’ve worked with over the years researching computational photography, 3D scanning and more.”

Curless’ work has helped lay the groundwork for major milestones in 3D scanning research. He invented what is now the standard method for merging range images to reconstruct 3D surfaces, capable of capturing details such as the ridges and scales of a dragon model. In addition, previous methods for extracting range data from optical triangulation scanners, which shine laser light on an object and measure the reflection to estimate its shape, were often inaccurate for surfaces that are curved, discontinuous or varying in reflectance. Curless introduced a new, more accurate method using spacetime analysis. His volumetric 3D reconstruction research has been used in feature film development and has since been incorporated into Google’s Tango augmented reality (AR) computing platform and Microsoft’s HoloLens.
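The core of the volumetric merging idea, reduced to one dimension, is that every scan votes on a truncated signed distance to the surface at each voxel and the voxel keeps a weighted running average; the fused surface sits at the zero crossing. The scan depths, weights and grid below are illustrative values for a sketch, not the published algorithm’s actual parameters:

```python
# 1D sketch of volumetric range-image merging: accumulate a weighted
# average of truncated signed distances per voxel, then extract the
# surface at the zero crossing of the averaged field.
TRUNC = 0.5                               # truncation distance
voxels = [i * 0.1 for i in range(21)]     # voxel centers along one ray
D = [0.0] * len(voxels)                   # accumulated signed distance
W = [0.0] * len(voxels)                   # accumulated weight

def integrate(scan_depth, weight):
    """Fold one scan's depth observation into the voxel grid."""
    for i, z in enumerate(voxels):
        sd = max(-TRUNC, min(TRUNC, scan_depth - z))  # signed, truncated
        D[i] = (W[i] * D[i] + weight * sd) / (W[i] + weight)
        W[i] += weight

integrate(1.0, weight=1.0)   # one noisy scan sees the surface at depth 1.0
integrate(1.1, weight=1.0)   # a second scan sees it at depth 1.1

# Locate the zero crossing by linear interpolation between voxels.
surface = None
for i in range(len(voxels) - 1):
    if D[i] > 0 >= D[i + 1]:
        t = D[i] / (D[i] - D[i + 1])
        surface = voxels[i] + t * (voxels[i + 1] - voxels[i])
# With equal weights, the fused surface lands midway between the scans.
```

Averaging in signed-distance space is what lets many partial, noisy scans reinforce each other into one consistent surface.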

The volumetric reconstruction method was also key to the success of the Digital Michelangelo Project, the first detailed 3D computer archive of the Renaissance artist’s sculptures including the statue “David.” The project led to more research into using 3D scanning for archaeological preservation.

His work in 3D scanning technology extends beyond digitizing artwork. Curless has pioneered methods for using large 3D datasets to create accurate human body models to fit the general population, ushering in new research in data-driven human body shape modeling. Further, he and collaborators combined 3D scanning with traditional photography to introduce a new paradigm — surface light fields — to concisely represent the view-dependent appearance of objects.

Outside of 3D shape scanning, Curless has made strides in computational photography. He helped develop the interactive digital photomontage method, a technique for combining multiple images together without visible seams. The technique was featured in Adobe Photoshop CS4 as well as in Google Maps to create satellite imagery. Its interface for stitching together the best parts of photos has led to Google Pixel phone’s “Best Take” feature and “Photomerge Group Shots” in Adobe Photoshop Elements 10.

Curless, along with his collaborators, also introduced a new method for solving the matting problem, or how to extract foreground elements from a background image. He has taken this technique further with real-time background matting for streaming video. 
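The matting problem is commonly modeled with the compositing equation C = alpha * F + (1 - alpha) * B, where C is the observed pixel, F and B are the foreground and background colors, and alpha is the foreground opacity. A minimal sketch with made-up grayscale values, assuming F and B are known (estimating them is the hard part that matting research addresses):

```python
def solve_alpha(C, F, B):
    """Recover alpha from the compositing equation when F and B are known."""
    if F == B:                     # the blend is ambiguous when F and B agree
        return None
    a = (C - B) / (F - B)
    return min(1.0, max(0.0, a))   # clamp to the valid opacity range [0, 1]

def composite(F, B, alpha):
    """Blend a foreground over a background: C = alpha*F + (1-alpha)*B."""
    return alpha * F + (1 - alpha) * B

# Illustrative grayscale values: a 75%-opaque bright foreground over a
# dark background, then alpha recovered from the observed pixel.
observed = composite(F=0.9, B=0.2, alpha=0.75)
alpha = solve_alpha(observed, F=0.9, B=0.2)
```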

In addition to being named an ACM Fellow, Curless has received the National Science Foundation CAREER Award, an Alfred P. Sloan Foundation Faculty Fellowship and the UW ACM Teaching Award.

“Brian is a pioneer in 3D shape scanning and computational photography,” said Seitz. “He invented techniques that became widely used both in academia and industry, and it’s amazing to see him recognized by this major honor.”

Jeffrey Heer: “For contributions to information visualization, human-centered data science and interactive machine learning.”

Portrait of Jeffrey Heer
Jeffrey Heer

Allen School professor Jeffrey Heer, who co-directs the UW Interactive Data Lab alongside colleague Leilani Battle, has published over 120 peer-reviewed papers and has developed some of the most popular open-source visualization tools used by developers around the world. Heer’s interest in visualization began during his undergraduate studies at the University of California, Berkeley, and since then his work has helped change the way people interact with data, charts and graphs. But for Heer, “there’s still so much to do.”

“I’m honored to be named an ACM Fellow, and I am particularly grateful to my amazing students and collaborators over the years,” said Heer, who holds the Jerre D. Noe Endowed Professorship in the Allen School. “Whether visualizing complex information, wrangling data into shape, authoring analyses or making sense of statistical and machine learning results, together we’ve helped make it easier for people to work with, make sense of and communicate data more effectively.”

The ACM recognized Heer for “contributions to information visualization, human-centered data science and interactive machine learning.”

Heer is best known for his work on interactive information visualization. Building interactive visualizations requires knowledge spanning user interface development, graphic design and algorithmic implementation. Over the years, Heer and his collaborators introduced widely adopted visualization tools and languages such as Prefuse, Protovis, D3, Vega, Vega-Lite and Mosaic that have become standard across industry, data science and journalism. 
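Vega-Lite, for example, describes a chart as a declarative JSON specification rather than imperative drawing code. A minimal sketch with invented data values (actually rendering the spec requires a Vega-Lite runtime, which is not shown here):

```python
import json

# A minimal Vega-Lite specification built as a Python dict and
# serialized to the JSON a Vega-Lite runtime would render. The data
# values and field names are illustrative; the point is that the chart
# is described declaratively (data, mark, encodings) rather than drawn.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"lab": "GRAIL", "papers": 6},
        {"lab": "IDL", "papers": 9},
    ]},
    "mark": "bar",                # draw one bar per data row
    "encoding": {                 # map data fields to visual channels
        "x": {"field": "lab", "type": "nominal"},
        "y": {"field": "papers", "type": "quantitative"},
    },
}
chart_json = json.dumps(spec, indent=2)
```

Because the specification is plain data, tools can generate, analyze and recommend charts programmatically, which is part of why these grammars spread so widely.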

These data visualization tools have laid the foundation for further research, from new interactive tools for exploratory data analysis to automated visualization design and recommendation. His work has had lasting impact: he has won three consecutive Test of Time Awards for papers published in IEEE Transactions on Visualization and Computer Graphics (IEEE VIS), in 2020 for his work on narrative visualization, in 2021 for D3, which made data visualization more mainstream, and in 2022 for helping data analysts understand the challenges they face in their workflows. Heer’s research has helped push the field of interactive visualization from simple graphs to more complex visualizations that are now easier to build.

Heer’s work, however, is not limited to visualization. He has collaborated with domain scientists on ways to use data analysis in fields ranging from studying red harvester ant colonies to understanding the development and well-being of transgender children. Alongside his collaborators, Heer has combined natural language processing, HCI and visualization for projects such as designing new mixed-initiative language translation technologies and identifying cycles of opioid addiction and recovery in online patient-authored text. Outside of research, Heer co-founded Trifacta, a company providing interactive tools for scalable data transformation, which was acquired by Alteryx in 2022. 

“I am incredibly excited that Jeffrey Heer is being recognized by the ACM for the tremendous impact he has had in the areas of Visualization and Human-Computer Interaction,” said Maneesh Agrawala, computer science professor at Stanford University. “His work has opened new directions for research on things like narrative in visualization, data preparation for analysis and grammars for visualization. It has also had immense impact beyond research with his open source tools like D3 and Vega regularly used by developers worldwide.”

Since joining the Allen School faculty in 2013, Heer has received the ACM’s Grace Murray Hopper Award, the 2017 IEEE Visualization Technical Achievement Award and Best Paper Awards at the ACM Conference on Human Factors in Computing Systems (CHI), EuroVis and IEEE InfoVis conferences. He was also inducted into the IEEE Visualization Academy in 2019, one of the most prestigious honors in the field of visualization.

Read more about the 2024 ACM Fellows.
