
Allen School’s Sewon Min is taking natural language processing to the next level to tackle real-world problems

Portrait of Sewon Min against periwinkle background

When Sewon Min first arrived at the University of Washington as an exchange student in the fall of 2016, little did she know how those three months would change the course of her academic career. After completing a brief stint as an undergraduate research assistant under the guidance of Allen School professors Hannaneh Hajishirzi and Ali Farhadi, she returned to Seoul National University in Korea to complete her bachelor’s degree. In the interim, Hajishirzi “tried really hard” to convince Min to move back to the Pacific Northwest to begin her Ph.D. — an effort that ultimately led to a happy reunion with her former advisor as well as a new mentor in professor Luke Zettlemoyer.

In the end, it doesn’t seem like Min took much convincing.

“It was very clear that returning to the Allen School for my Ph.D. would be the right choice for me. I loved every interaction I had with Hanna and Ali,” recalled Min. “All of our discussions and their suggestions opened up a lot of new directions. Oftentimes, when I was stuck, I would leave our meeting excited about new ideas I could try. Also, Hanna has been a world-leading expert in the topic I’ve been especially excited about — question answering — and I had a strong desire to continue working on that.”

Since her arrival, Min has wasted no time in establishing herself as one of the most promising up-and-coming researchers in the field. She has published more than 15 papers at premier NLP conferences, which have earned more than 1,700 citations combined. Now in her fourth year at the Allen School, Min recently earned a 2022 JP Morgan Ph.D. Fellowship in artificial intelligence to build on her already impressive record of leading the field of natural language processing in new — and sometimes unexpected — directions. 

“Sewon is a rising star who aims to take NLP paradigms to the next level. She not only pushes performance for long-standing hard problems, but also blazes entirely new directions for the field,” said Hajishirzi, who splits her time between the Allen School’s H2Lab and the Allen Institute for AI, where she is a senior research manager. “Sewon’s work is both technically sophisticated and highly impactful, and she continues to push the boundaries of what natural language models can do for a range of real-world applications.”

Min has proven particularly adept at pushing boundaries through her work on question answering. Whereas much of the previous research in this area has focused on restricted questions — ones for which a single answer can be extracted from a given document — Min has chosen to focus on broadening the capabilities of NLP models to respond to questions more akin to those posed by humans.

“My goal is to build a system that understands and can reason about natural language at a level that will help people solve problems they face in their daily lives, from answering their queries, to detecting false information on the internet,” explained Min. “Current systems assume a well-defined user query and a single, definite answer, but that’s not how the human quest for knowledge works! People ask ambiguous and open-ended questions, sometimes built on false presuppositions and requiring complex processing and reasoning about real-world conditions.”

One of Min’s early research breakthroughs was in multi-hop question answering — that is, questions that require reasoning about one or more facts to arrive at the correct answer. In a paper that appeared at the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Min and her collaborators proposed a novel approach to the problem that involves decomposing multi-hop questions into their component parts. Their system, DecompRC, generates sub-questions that can be answered by state-of-the-art single-hop QA models, and then chains the answers to arrive at the correct response to the original question. The technique proved to be both clever and efficient; even with the benefit of only 400 labeled training examples — a relatively minuscule amount of data in machine learning terms — Min and the team demonstrated that DecompRC could generate high-quality sub-questions on a par with human-authored ones.
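The decompose-answer-chain pipeline described above can be sketched in a few lines. This is a toy illustration only — the hard-coded decomposer and fact table below are invented stand-ins for the learned models in the actual DecompRC paper:

```python
# Toy stand-in for a single-hop QA model: a lookup over known facts.
SINGLE_HOP_FACTS = {
    "Who wrote Hamlet?": "Shakespeare",
    "Where was Shakespeare born?": "Stratford-upon-Avon",
}

def single_hop_qa(question: str) -> str:
    """Stand-in for a state-of-the-art single-hop QA model."""
    return SINGLE_HOP_FACTS[question]

def decompose(question: str) -> list[str]:
    """Stand-in decomposer. '[ANSWER]' marks where the previous
    hop's answer gets substituted. In DecompRC, this decomposition
    is learned from only a few hundred labeled examples."""
    return [
        "Who wrote Hamlet?",
        "Where was [ANSWER] born?",
    ]

def multi_hop_qa(question: str) -> str:
    """Answer each sub-question in turn, feeding each answer
    forward into the next hop."""
    answer = ""
    for sub_q in decompose(question):
        sub_q = sub_q.replace("[ANSWER]", answer)
        answer = single_hop_qa(sub_q)
    return answer

print(multi_hop_qa("Where was the author of Hamlet born?"))
# -> Stratford-upon-Avon
```

The key design idea is that the multi-hop reasoning burden is shifted into the decomposition step, so that existing, well-tuned single-hop models can do the actual answering.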

Another area in which Min has already made significant contributions is open-ended question answering. She became interested in exploring how to endow models with the ability to field questions that are inherently ambiguous after realizing the extent to which existing models make faulty assumptions about the nature of the questions real-world users would ask. For example, after examining a corpus of over 14,000 questions based on Google search queries, Min found more than half to be ambiguous in their references to events, entities, time dependency or other factors. Given that ambiguity can be difficult for both machines and humans to spot, she conceived of a new open-ended QA task, AmbigQA, in which the model has to retrieve every plausible answer based on the various potential interpretations of what the questioner was searching for, and then produce a disambiguated question for each answer. 
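The input/output shape of the AmbigQA task can be illustrated with a small sketch. The function and the example pairs below are invented for illustration and are not taken from the paper's data; a real system would retrieve evidence and generate the pairs rather than hard-code them:

```python
def ambig_qa(question: str) -> list[tuple[str, str]]:
    """Toy stand-in for an AmbigQA system: given an ambiguous
    question, return (disambiguated question, answer) pairs, one
    per plausible interpretation."""
    # Hypothetical hard-coded example of a time-dependent ambiguity:
    # "first Star Wars film" can mean first released or first in
    # the saga's internal chronology.
    interpretations = {
        "When did the first Star Wars film come out?": [
            ("When did the original 1977 Star Wars film premiere "
             "in theaters?", "May 25, 1977"),
            ("When did the first Star Wars prequel, The Phantom "
             "Menace, come out?", "May 19, 1999"),
        ],
    }
    return interpretations[question]

for sub_q, answer in ambig_qa(
        "When did the first Star Wars film come out?"):
    print(f"{sub_q} -> {answer}")
```

The point of the format is that the system cannot simply return one answer; it must surface the ambiguity itself, rewriting the question so each answer is unambiguously correct.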

Min and her colleagues presented their results at the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), where the work caught the attention of the NLP community as an example of how to effectively model ambiguity as well as study other challenges associated with more realistic open-domain question answering. According to her co-advisor, Zettlemoyer, Min’s work on this and several other projects illustrates her dedication to not only developing solutions to important problems, but also to making sure they are the right problems to solve in the first place.

“Sewon has an uncanny ability to figure out what methods will work well in practice while opening up entirely new ways of thinking about problems,” said Zettlemoyer, who is also a research director at Meta AI. “Beyond her technical contributions, what really sets Sewon apart is how she considers problems as a whole and pushes the machine learning community into more realistic and open-ended territory instead of focusing narrowly on well-trodden challenges.”

With support from the JP Morgan Ph.D. Fellowship, Min is eager to keep to the path less traveled as she grapples with an open-ended question of her own.

“How do we build models that can deal with that level of ambiguity and imperfection — and do so in a way that is computationally efficient?” she asked. “That is the problem that I’m trying to solve.”

Min is one of 11 students who were named JP Morgan Ph.D. Fellows last month as part of the company’s AI Research Awards program.

Congratulations, Sewon!