
Sham Kakade earns Test of Time Award at ICML 2020 for novel optimization techniques that sparked new directions in machine learning research

Sham Kakade portrait

Professor Sham Kakade, a member of the Allen School’s Machine Learning and Theory of Computation groups, received a Test of Time Award at the International Conference on Machine Learning (ICML 2020) for his work on “Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design.” In the winning paper, which was originally presented at ICML 2010, Kakade and his colleagues established the first sublinear regret bounds for Gaussian process (GP) optimization in the nonparametric setting. The team’s work was lauded by the machine learning community for its technical depth and for its enduring impact on both theory and practice.

Kakade, who holds a joint faculty appointment in the Allen School and the University of Washington’s Department of Statistics and is also a senior data science fellow at the eScience Institute, co-authored the paper while a professor at the University of Pennsylvania. His collaborators on the project included Niranjan Srinivas, a Ph.D. student at the California Institute of Technology; Andreas Krause, a professor at Caltech at the time; and Matthias Seeger, then a faculty member at the Universität des Saarlandes in Germany. Together, the team set out to address an open question in machine learning: how to optimize an unknown, noisy function that is expensive to evaluate while minimizing the number of samples required.

“We were interested in finding a principled framework for addressing the problem of Bayesian optimization, and we realized that one way to formalize this was through the theory of the sequential design of experiments,” explained Kakade. “One that I was particularly excited about was how we could provide a sharp characterization of the learning complexity through a novel concept we introduced, the ‘information gain.’”

The question has numerous applications in both laboratory and real-world settings, from determining the optimal control strategies for robots, to managing transportation and environmental systems, to choosing which advertisements to display in a sponsored web search. To meet the challenge, Kakade and his colleagues united the fields of Bayesian optimization, bandits, and experimental design. The team analyzed GP optimization as a multi-armed bandit problem to offer a novel approach for deriving cumulative regret bounds in terms of maximal information gain. In the process, they succeeded in establishing a novel connection between GP optimization and experimental design.
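In the paper’s framing, the quantities involved can be sketched as follows (notation paraphrased from the paper; $D$ is the decision set, $x_t$ the point sampled at round $t$, and $x^*$ a maximizer of the unknown function $f$):

```latex
% Cumulative regret after T rounds against the best fixed choice x*
R_T = \sum_{t=1}^{T} \big( f(x^*) - f(x_t) \big)

% Maximal information gain after T samples, where I denotes the mutual
% information between the noisy observations y_A and the function f
\gamma_T = \max_{A \subseteq D,\; |A| = T} I(y_A; f)

% Main result (informal): with high probability, GP-UCB achieves
R_T = \mathcal{O}^{*}\!\big( \sqrt{T\,\gamma_T} \big)
```

Sublinear regret then follows whenever $\gamma_T$ grows sublinearly in $T$, which the paper shows holds for commonly used kernels.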

By applying a simple Bayesian optimization method known as the Gaussian Process Upper Confidence Bound (GP-UCB) algorithm, the team demonstrated that they could obtain explicit sublinear regret bounds for a number of commonly used covariance functions. In experiments using real-world network sensor data, Kakade and his collaborators showed that their approach performed as well as or better than existing GP optimization algorithms that lack regret bounds. In the decade since the researchers unveiled their results, Bayesian optimization has become a powerful tool in machine learning applications spanning experimental design, hyperparameter tuning, and more. The method, proof techniques, and practical results put forward by Kakade and his colleagues have been credited with sparking new research directions and subsequently enriching the field of machine learning in a variety of ways.
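The idea behind GP-UCB can be illustrated with a minimal sketch: maintain a Gaussian process posterior over the unknown function and, each round, sample the candidate that maximizes the posterior mean plus a confidence-scaled posterior standard deviation. The sketch below assumes an RBF kernel, a discrete candidate set, and a simplified confidence schedule; all names and parameter values are illustrative, not the paper’s implementation.

```python
# Minimal GP-UCB sketch: RBF kernel, 1-D discrete candidate set.
# Assumptions (not from the paper): kernel length scale, noise level,
# and a simplified beta_t schedule standing in for the theoretical one.
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    # Squared-exponential covariance between two sets of 1-D points.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X_obs, y_obs, X_cand, noise=1e-2):
    # Standard GP regression posterior mean/variance at the candidates.
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_obs, X_cand)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_obs
    # Prior variance is 1 for the RBF kernel; subtract the explained part.
    var = 1.0 - np.einsum("ij,ji->i", K_s.T @ K_inv, K_s)
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, X_cand, T=30, noise=1e-2, seed=0):
    # Each round, pick argmax of mean + sqrt(beta_t) * std (the UCB rule).
    rng = np.random.default_rng(seed)
    X_obs, y_obs = [], []
    for t in range(1, T + 1):
        if not X_obs:
            x = X_cand[rng.integers(len(X_cand))]  # random first sample
        else:
            mu, var = gp_posterior(
                np.array(X_obs), np.array(y_obs), X_cand, noise
            )
            beta = 2.0 * np.log(len(X_cand) * t**2)  # simplified schedule
            x = X_cand[np.argmax(mu + np.sqrt(beta * var))]
        X_obs.append(x)
        y_obs.append(f(x) + noise * rng.standard_normal())  # noisy query
    return np.array(X_obs), np.array(y_obs)

# Toy objective: a smooth bump on [0, 1], maximized near x = 0.6.
f = lambda x: np.exp(-((x - 0.6) ** 2) / 0.05)
X_cand = np.linspace(0.0, 1.0, 200)
xs, ys = gp_ucb(f, X_cand)
```

The `beta` term controls the exploration–exploitation trade-off: shrinking it makes the rule greedy, while the growing logarithmic schedule keeps enough exploration for the regret guarantees to apply.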

Since the paper’s initial publication, Srinivas has joined 10x Genomics as a computational biologist after completing a postdoc at the University of California, Berkeley, while Krause has moved from Caltech to the faculty of ETH Zürich in Switzerland. Seeger is now a principal machine learning scientist at Amazon. The team is being formally recognized during the ICML 2020 conference taking place virtually this week.


Congratulations to Sham and his co-authors!

July 14, 2020