
‘Working to solve global problems’: Allen School Ph.D. student Akari Asai named one of MIT Technology Review’s Innovators Under 35 Japan

Akari Asai

Despite their growing potential and increasing popularity, large language models (LLMs) often produce responses that are factually inaccurate or nonsensical, also known as hallucinations. 

Allen School Ph.D. student Akari Asai has dedicated her research to tackling these problems using retrieval-augmented language models, a new class of LLMs that retrieve relevant information from an external datastore using a query the model generates. For her pioneering work in Artificial Intelligence & Robotics, Asai was recognized as one of MIT Technology Review’s 2024 Innovators Under 35 Japan. The award honors young innovators from Japan who are “working to solve global problems.”

“Being named to MIT Technology Review’s Innovators Under 35 is an incredible honor,” said Asai, who is a member of the Allen School’s H2 Lab led by professor Hannaneh Hajishirzi. “It highlights the power of collaboration and the potential of AI to address real-world challenges. With the rapid adoption of LLMs, the need to investigate their limitations, develop more powerful models and apply them in safety-critical domains has never been more urgent. This recognition motivates me to keep working on projects that can make a meaningful impact.”

Asai’s paper on adaptive retrieval-augmented LMs was one of the first to show that retrieval-augmented generation (RAG) is effective at reducing hallucinations. Without RAG, traditional LLMs generate responses to user input based solely on the information they were trained on. In contrast, RAG adds an information retrieval component that first uses the user input to pull information from an external data source, so the LLM can generate responses that incorporate the latest information without needing additional training.
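The RAG pattern described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not Asai's implementation: the word-overlap retriever stands in for a real dense or sparse retriever (e.g., DPR or BM25), and the assembled prompt would be passed to an actual LLM.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# A retriever scores documents in an external datastore against the user
# query, and the top passages are prepended to the prompt so the model can
# ground its answer in up-to-date text rather than training data alone.

def retrieve(query: str, datastore: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for a
    real retriever such as BM25 or a dense dual-encoder)."""
    q_words = set(query.lower().split())
    scored = sorted(
        datastore,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Tiny external datastore standing in for, e.g., a Wikipedia index.
datastore = [
    "Seattle is the largest city in the state of Washington.",
    "The Allen School is part of the University of Washington.",
    "Retrieval-augmented generation grounds answers in external text.",
]

query = "Where is the Allen School?"
prompt = build_prompt(query, retrieve(query, datastore))
```

The resulting `prompt` would then be sent to the language model; because the relevant passage is in the context, the model can answer from evidence instead of parametric memory.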

As part of the study, Asai and her team compared the response accuracy of various conventional and retrieval-augmented language models across a dataset of 14,000 wide-ranging questions. They found that retrieval augmentation improved the performance of LMs more effectively and efficiently than adding training data or scaling up model size. Asai and her collaborators presented their work at the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), where they received the Best Video Award.

Building on that research, Asai has helped develop the foundational components of retrieval-augmented language models along with improved architectures, training strategies and inference techniques. In 2024, she introduced a new framework called Self-reflective RAG, or Self-RAG, that enhances a model’s quality and factual accuracy through retrieval and self-reflection. While standard RAG retrieves relevant information only once or at a fixed number of steps, Self-RAG can retrieve adaptively and repeatedly, making it useful for diverse downstream queries such as instruction following. The research received a best paper honorable mention at the 37th Annual Conference on Neural Information Processing Systems (NeurIPS) Instruction Workshop and was ranked in the top 1% of papers at the 12th International Conference on Learning Representations (ICLR 2024).

Asai is also interested in advancing the ways that retrieval-augmented language models can tackle real-world challenges. Recently, she launched Ai2 OpenScholar, a new model designed to help scientists more effectively and efficiently navigate and synthesize scientific literature. She has also explored how retrieval augmentation can help with code generation and helped develop frameworks that can improve information access, especially among linguistic minorities. The latter includes Cross-lingual Open-Retrieval Answer Generation (CORA), Cross-lingual Open Retrieval Question Answering (XOR QA) and AfriQA, the first cross-lingual question answering dataset focused on African languages. In 2022, she received an IBM Ph.D. Fellowship to advance her work toward ensuring anyone can find what they need efficiently online, including across multiple languages and domains.

In future research, Asai aims to tackle other limitations of LLMs. Although LLMs are becoming increasingly useful in fields that require high precision, they still face challenges such as a lack of attribution, Asai explained. She aims to collaborate with domain specialists “to create systems that experts can truly rely on.”

“I’m excited about the future of my research, where I aim to develop more efficient and reliable AI systems. My focus is on creating models that are not only scalable but also transparent, making AI more accessible and impactful across diverse fields,” Asai said.

Read more about the MIT Technology Review’s Innovators Under 35 Japan and Asai’s research.