Hao Peng wins 2019 Google Ph.D. Fellowship

Hao Peng

Hao Peng, a Ph.D. student working with Allen School professor Noah Smith, has been named a 2019 Google Ph.D. Fellow for his research in natural language processing (NLP). His research focuses on a wide variety of problems in NLP, including representation learning and structured prediction.

Peng, one of 54 students worldwide selected for a Fellowship, aims to analyze and understand the inner workings and decisions of deep learning models, and to incorporate inductive bias into their design to facilitate better learning algorithms. His research provides a deeper understanding of many state-of-the-art NLP models and offers more principled ways to build inductive bias into representation learning algorithms, making them both more computationally efficient and less data-hungry.

“Hao has been contributing groundbreaking work in natural language processing that combines the strengths of so-called ‘deep’ representation learning with structured prediction,” said Smith. “He is creative, sets a very high bar for his work, and is a delightful collaborator.”

One of Peng’s groundbreaking contributions is his research on semi-parametric models for natural language generation and text summarization. He focused on the project last summer during an internship with the Language team at Google AI. Based on that work, Peng was invited to continue his internship part-time until the team published its findings at the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).

Another important piece of research that Peng published was “Backpropagating through Structured Argmax using a SPIGOT,” which earned a Best Paper Honorable Mention at the annual meeting of the Association for Computational Linguistics in 2018. The paper proposes the structured projection of intermediate gradients optimization technique (SPIGOT), which enables end-to-end training of neural models that use structured prediction as intermediate layers. Experiments show that SPIGOT outperforms pipelined and non-structured baselines, providing evidence that structured bias may help in learning better NLP models. Due to its flexibility, SPIGOT is applicable to multitask learning with partial or full supervision of the intermediate tasks, as well as to inducing latent intermediate structures, according to Peng. His collaborators on the project include Smith and Sam Thomson, formerly a Ph.D. student at Carnegie Mellon University and now at Semantic Machines.
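To give a flavor of the idea, here is a minimal, illustrative sketch of the SPIGOT backward rule written as a custom PyTorch autograd function. It assumes the simplest possible setting, where the structured argmax reduces to a one-hot argmax over the probability simplex; the names (`Spigot`, `project_onto_simplex`), the Euclidean simplex projection, and the toy loss are illustrative assumptions, not the paper’s exact experimental setup.

```python
import torch

class Spigot(torch.autograd.Function):
    """Illustrative SPIGOT layer: hard argmax forward, projected
    gradient-step backward. Assumes structures are one-hot vectors
    over a simplex; real uses involve richer structured polytopes."""

    @staticmethod
    def forward(ctx, scores):
        # Forward pass: a hard (non-differentiable) argmax, encoded one-hot.
        z_hat = torch.zeros_like(scores)
        z_hat[scores.argmax()] = 1.0
        ctx.save_for_backward(z_hat)
        return z_hat

    @staticmethod
    def backward(ctx, grad_output):
        # SPIGOT backward: step from z_hat against the downstream gradient,
        # project back onto the feasible set, and send the difference
        # z_hat - proj(z_hat - eta * grad) to the score network.
        (z_hat,) = ctx.saved_tensors
        eta = 1.0  # step size; a tunable hyperparameter
        target = project_onto_simplex(z_hat - eta * grad_output)
        return z_hat - target

def project_onto_simplex(v):
    # Euclidean projection onto the probability simplex (a standard
    # sort-based routine); it stands in here for projection onto the
    # convex hull of feasible structures.
    u, _ = torch.sort(v, descending=True)
    css = torch.cumsum(u, dim=0) - 1.0
    ind = torch.arange(1, v.numel() + 1, dtype=v.dtype)
    mask = u - css / ind > 0
    theta = css[mask][-1] / ind[mask][-1]
    return torch.clamp(v - theta, min=0.0)

# Usage: scores from an intermediate scorer feed a hard argmax whose
# output a downstream model consumes; gradients still reach the scorer.
scores = torch.randn(5, requires_grad=True)
z = Spigot.apply(scores)
loss = ((z - torch.ones(5) / 5.0) ** 2).sum()  # a toy downstream loss
loss.backward()
print(scores.grad)
```

The design point is the backward pass: instead of a zero or straight-through gradient at the argmax, the layer takes a gradient step from the predicted structure, projects the result back onto the feasible region, and treats the difference as the gradient for the scorer below.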

Peng has co-authored a total of eight papers in the last three years while pursuing his Ph.D. at the Allen School. The Google Fellowship will help him extend this line of work, developing fresh theoretical justifications for state-of-the-art NLP models and more principled techniques for baking inductive bias into representation learning algorithms.

Peng has worked as a research intern at Google New York and Google Seattle during his studies at the UW. Prior to that, he interned at Microsoft Research Asia in Beijing and at the University of Edinburgh.

Since 2009, the Google Ph.D. Fellowship program has recognized and supported exceptional graduate students working in core and emerging areas of computer science. Previous Allen School recipients include Joseph Redmon (2018), Tianqi Chen and Arvind Satyanarayan (2016), Aaron Parks and Kyle Rector (2015), and Robert Gens and Vincent Liu (2014). Learn more about the 2019 Google Ph.D. Fellowships here.

Congratulations, Hao!