Allen School team earns ACM PODS Alberto O. Mendelzon Test-of-Time Award for helping scientists better understand complexity behind parallel database query processing

Pictured, left to right: Paul Beame, Dan Suciu and Paris Koutris.

A chance encounter helped Paul Beame, Paris Koutris (Ph.D., ’15) and Dan Suciu create a model that aids scientists in understanding some of the deeper nuances of big data management. Beame was passing by Suciu’s office one day while Suciu and his advisee Koutris, then a Ph.D. student in the Allen School, were discussing research on parallel query processing, a shared idée fixe among the trio.

They were thinking about how to model the complexity of new modes of computation used in practice, such as MapReduce, a creation of Allen School alum Jeff Dean (Ph.D., ’96) and Sanjay Ghemawat that was developed to run major systems at Google. MapReduce, however, involved a specific set of parallel primitives that limited the types of operations that could be captured.
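
For readers unfamiliar with those primitives, here is a minimal, single-machine sketch of the map and reduce operations; the names and plumbing are illustrative, not Google's actual API:

```python
from collections import defaultdict
from itertools import chain

# A minimal, single-machine sketch of the two MapReduce primitives.
# The names and plumbing here are illustrative, not Google's API.

def map_phase(record):
    # Emit (key, value) pairs: here, one pair per word in a line of text.
    for word in record.split():
        yield (word, 1)

def reduce_phase(key, values):
    # Combine all values sharing a key: here, summing word counts.
    return (key, sum(values))

def run_mapreduce(records):
    groups = defaultdict(list)
    # The "shuffle": route every emitted pair to the group for its key.
    for key, value in chain.from_iterable(map_phase(r) for r in records):
        groups[key].append(value)
    return [reduce_phase(k, vs) for k, vs in groups.items()]

print(run_mapreduce(["to be or not to be"]))
# [('to', 2), ('be', 2), ('or', 1), ('not', 1)]
```

Every computation expressed in the framework must be decomposed into these two steps, which is exactly the restriction the trio wanted their model to move beyond.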

For Koutris and Suciu, a professor in the University of Washington’s Database Group, more was possible. They wanted a theoretical model, Suciu recalled, that could also capture other computations that could be performed more efficiently with other primitives. They also wanted to better understand “the fundamental limits” of these systems’ computing capabilities. They just needed an added spark, another stroke of inspiration. 

Along came Beame, a professor in the Allen School’s Theory of Computation group, and the rest was history. 

“It turned out that Paul had already been mulling over the general question of modeling MapReduce computations but there had been an aspect missing in his thinking that was a feature in my and Paris’ work on parallel database query processing,” Suciu said. “Even in the discussion that very first day, it became clear that the fundamental limitation that the model needed to capture was the communication of data that processors require in order to solve computational problems.” 

Beame added that the meeting was made possible by the collaboration and openness of the culture in CSE. Suciu’s door was open; his and Koutris’ minds, even more so. 

“We value collaboration and the mixing of ideas among different research areas enough that we put faculty from different areas near each other, rather than cloistering faculty in different zones by research area,” Beame said. “Dan’s office is on the way to mine, just three doors away.”

Their work has helped crystallize what’s possible in the world of big data processing. Last month, the trio’s paper “Communication Steps for Parallel Query Processing” earned the 2023 ACM PODS Alberto O. Mendelzon Test-of-Time Award for providing a deeper understanding of the complexity behind database query evaluation. The award was announced at the 2023 ACM SIGMOD/PODS conference in Seattle, which brings together the Association for Computing Machinery’s Special Interest Group on Management of Data (SIGMOD) and the Symposium on Principles of Database Systems (PODS).

Koutris joined the University of Wisconsin-Madison as a professor of computer science after graduating from the Allen School in 2015.

The three continued to work together on a series of subsequent papers, investigating different questions related to their initial research. One involved data skew. The group theorized that by allowing additional communication rounds, they would be able to compute more complex queries.

“That, indeed, turned out to be true,” Suciu said. “However, to our surprise, additional communication rounds also allowed the same query to process data that is skewed.” 

For example, Suciu added, there may be many more records referring to former President Barack Obama than records referring to a less famous individual. These “heavy hitters” pose a major problem for distributed computation, he continued, because they tend to overload a server and slow down the entire computation.

In their Test-of-Time paper, the trio showed that the impact of heavy hitters could be blunted: by using additional rounds of communication, these records could be spread across multiple servers, mitigating their effects.

“To our surprise,” Suciu said, “the additional communication rounds turned out to be very effective in decreasing the total cost of computation on skewed data.”
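
To make the skew problem concrete, here is a hypothetical sketch, not the paper's actual algorithm, of how hash partitioning piles a heavy hitter onto one server, and how scattering that key in an extra round rebalances the load:

```python
import random
from collections import Counter

# A hypothetical illustration of the heavy-hitter problem. Hash
# partitioning on a key sends every record for a popular key to the
# same server, no matter how many servers are available.

P = 10  # number of servers
records = ["obama"] * 10_000 + [f"person{i}" for i in range(10_000)]

one_round = Counter(hash(r) % P for r in records)
print(max(one_round.values()))  # one server receives 10,000+ records

# With an extra round, a heavy key can first be scattered uniformly
# across servers, each handling a share of it, before a second round
# combines the partial results. A rough sketch of the idea only, not
# the paper's algorithm.
two_round = Counter(
    random.randrange(P) if r == "obama" else hash(r) % P
    for r in records
)
print(max(two_round.values()))  # load is now roughly balanced (~2,000)
```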

For the team, their model, dubbed Massively Parallel Communication (MPC), is yet another marker in the race toward measuring the complexity of query processing systems. During the early 2000s, the database and data processing environment underwent sweeping changes, with several systems, including MapReduce, popping up that could process vast swaths of data using thousands of commodity machines. The approach was unprecedented, and relatively cheap.

“At around the same time, Amazon provided the first cloud services, allowing users to access massively distributed systems,” Suciu said. “The traditional way to measure the complexity of a query was in terms of the number of accesses to the disk. However, these new, massively distributed systems were using a sufficiently large number of servers to store the entire data in main memory.”

With progress came a new set of challenges. Now, the source of complexity, Suciu said, was the communication cost among the servers. It required a new theoretical approach to understand the contours of a changing landscape. 

“Once we had the model,” Beame said, “our focus was on understanding the complexity and best ways to solve natural and important classes of problems in database query evaluation.”

The team started with a simple model: a large number of servers, each with arbitrary computational power. The only limitations were the amount of data exchanged among the servers and the number of communication rounds.
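
A toy rendering of that model, under our own naming rather than the paper's notation, might look like the following: a one-round hash join in which each server's local computation is free and the only tracked cost is the maximum number of tuples any server receives:

```python
from collections import defaultdict

# A toy rendering of the model described above: p servers, each with
# unbounded local compute, where the only cost tracked is the maximum
# number of tuples any one server receives in a communication round.
# The naming is ours, not the paper's notation.

def one_round_hash_join(R, S, p):
    # Round 1: repartition both relations on the shared join key b.
    inboxes = [([], []) for _ in range(p)]
    for a, b in R:
        inboxes[hash(b) % p][0].append((a, b))
    for b, c in S:
        inboxes[hash(b) % p][1].append((b, c))

    # The model's cost: the most loaded server's inbox size.
    load = max(len(r) + len(s) for r, s in inboxes)

    # Local computation is "free": each server joins its own inbox.
    output = []
    for r_part, s_part in inboxes:
        index = defaultdict(list)
        for a, b in r_part:
            index[b].append(a)
        output.extend((a, b, c) for b, c in s_part for a in index[b])
    return output, load

R = [(i, i % 5) for i in range(20)]  # R(a, b)
S = [(i % 5, i) for i in range(20)]  # S(b, c)
triples, load = one_round_hash_join(R, S, p=4)
print(len(triples), load)  # 80 joined tuples; max per-server load 16
```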

“Our breakthrough came when we were able to explain in this model why some queries were easier to compute than others,” Suciu said, “and to precisely characterize the communication cost of the queries.”
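
As one concrete instance of that characterization, recalled here from the paper in paraphrase rather than quoted: for a conjunctive query q over m input tuples on p servers, any single-round algorithm needs per-server load on the order of

```latex
L \;=\; \Omega\!\left(\frac{m}{p^{1/\tau^*}}\right),
\qquad \tau^* = \text{the fractional edge packing number of } q .
```

For the triangle query, for example, \tau^* = 3/2, so the required load is roughly m/p^{2/3} rather than the m/p one might naively hope for from a perfect partition.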

Their breakthrough has far-reaching implications. Thanks to the team’s research, systems developers and data analysts can better understand the limitations of processing complex queries on massively distributed systems. While it’s not the first model to attempt to capture synchronous distributed parallel computation, its elegance and simplicity have made it popular and helped it withstand the test of time.

“The MPC model was very clean and natural to understand and think about,” Beame said. “The introduction of the model was quite timely, and the MPC terminology has become widely adopted and used.”