The Allen School’s Gabriel Ilharco and Ashish Sharma are among 13 students across the U.S. and England to receive 2023 J.P. Morgan AI Ph.D. Fellowships. The fellowships are part of the J.P. Morgan AI Research Awards Program, which advances artificial intelligence (AI) research to solve real-world problems.
Ilharco, a fourth-year Ph.D. student in the Allen School’s H2Lab, is advised by professors Ali Farhadi and Hannaneh Hajishirzi. His research focuses on advancing large multimodal models as reliable foundations in AI.
“I believe the next generation of models will push existing boundaries through more flexible interfaces,” Ilharco said. “There is much progress to be made towards that vision, both in training algorithms and model architectures, and in understanding how to design better datasets to train the models. I hope my research will continue to help in all of these directions.”
During his fellowship, Ilharco said he hopes to continue his work on building more reliable machine learning systems. In the past decade, machine learning systems have become highly capable at specific tasks such as recognizing objects in images or summarizing a piece of text, and models such as GPT-4, Flamingo and CLIP have demonstrated impressive versatility across applications. Yet their abilities can advance further, Ilharco said, with the end goal being a single model that can be deployed across a wider range of applications.
To meet this challenge, Ilharco is targeting dataset design. A recent project, DataComp, is a benchmark for designing multimodal datasets. Ilharco was part of the research team that found smaller, more stringently filtered datasets can yield models that generalize better than those trained on larger, noisier datasets. In their paper, the researchers showed that the DataComp workflow led to better training sets overall.
Ilharco and his collaborators will host a workshop centered on DataComp at the International Conference on Computer Vision (ICCV 2023) in October.
“DataComp is designed to put research on datasets on rigorous empirical foundations, drawing attention to this understudied research area,” Ilharco said. “The goal is that it leads to the next generation of multimodal datasets.”
Another project, which introduced a framework for editing neural networks, appeared at the 11th International Conference on Learning Representations (ICLR 2023) this spring. Ilharco co-authored the paper, which investigated how the behavior of a trained model could be steered for the better using a technique called task arithmetic. A task vector is obtained by subtracting a pretrained model’s weights from the weights of the same model after fine-tuning on a task. In one example, the team showed that negating a task vector made a model produce less toxic generations. Conversely, adding task vectors improved a model’s performance on multiple tasks simultaneously, as well as on a single task. Ilharco and his collaborators also found that combining task vectors into task analogies improved performance on domains or subpopulations where data is scarce.
Their findings let users manipulate a model more easily, expediting the editing process. Because arithmetic operations over task vectors involve only adding or subtracting model weights, they are cheap to compute compared to alternatives. They also yield a single model of the same size, incurring no extra inference cost.
“We show how to control the behavior of a trained model — for example, making the model produce less toxic generations or learning a new task — by operating directly in the weight space of the model,” Ilharco said. “With this technique, editing models is simple, fast and effective.”
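The idea admits a compact illustration. The sketch below is a minimal, hypothetical example assuming PyTorch-style state dicts of floating-point weights; it is not the released code. It computes a task vector as the difference between fine-tuned and pretrained weights, then adds or subtracts scaled task vectors to edit a model:

```python
import torch


def task_vector(pretrained: dict, finetuned: dict) -> dict:
    """Task vector = fine-tuned weights minus pretrained weights."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}


def apply_task_vectors(pretrained: dict, vectors: list, coeffs: list) -> dict:
    """Edit a model by adding scaled task vectors to the pretrained weights.

    Negative coefficients subtract a vector (e.g., to forget a behavior);
    positive ones add capabilities, and several vectors can be combined.
    """
    edited = {k: v.clone() for k, v in pretrained.items()}
    for vec, alpha in zip(vectors, coeffs):
        for k in edited:
            edited[k] = edited[k] + alpha * vec[k]
    return edited


# Toy usage with random weights standing in for real checkpoints:
pre = {"w": torch.zeros(3)}
ft_a = {"w": torch.tensor([1.0, 0.0, 0.0])}   # fine-tuned on task A
ft_b = {"w": torch.tensor([0.0, 1.0, 0.0])}   # fine-tuned on task B
vec_a, vec_b = task_vector(pre, ft_a), task_vector(pre, ft_b)

multi_task = apply_task_vectors(pre, [vec_a, vec_b], [1.0, 1.0])  # learn both tasks
forget_a = apply_task_vectors(pre, [vec_a], [-1.0])               # negate task A
```

Because the edited checkpoint has the same shape as the original, it can be dropped into the existing model with no change to inference.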
For Ilharco, the next wave of multimodal models is fast approaching, and he wants to be at the center of it.
“I hope to be a part of this journey,” he said.
Sharma, also a fourth-year Ph.D. student, is advised by professor Tim Althoff in the Allen School’s Behavioral Data Science Lab. He studies how AI can support mental health and well-being.
“I’m excited to be selected for this fellowship, which will help me further my research on human-AI collaboration,” he said. “AI systems interacting with humans must accommodate human behaviors and preferences, and ensure mutual effectiveness and productivity. To this end, I am excited to pursue my efforts in making these systems more personalized.”
Sharma’s long-term goal focuses on developing AI systems that empower people in real-world tasks. His research includes using AI to help peer supporters increase empathy in their communications with people seeking mental health support, and exploring how AI can help users regulate negative emotions and intrusive thoughts.
Both put the user — the human being — at the center.
“Effectively supporting humans necessitates personalization,” Sharma said. “Current AI systems tend to provide generalized support, lacking the ability to deliver experiences tailored to the specific needs of end-users. There is a need to put increased emphasis on developing AI-based interventions that provide personalized experiences to support human well-being.”
Sharma’s work with mental health experts and computer scientists was among the earliest efforts to demonstrate how AI and natural language processing-based methods could provide real-time feedback that helps users make their conversations more empathetic.
At The Web Conference 2021, he and his co-authors won a Best Paper Award for their work on PARTNER, a deep reinforcement learning agent that learns to edit text to increase “the empathy quotient” in a conversation. In testing PARTNER, they found that using the agent increased empathy by 20% overall and by 39% for those struggling to engage empathetically with their conversational partners.
“PARTNER learns to reverse-engineer empathy rewritings by initially automating the removal of empathic elements from text and subsequently reintroducing them,” Sharma said. “Also, it leverages rewards powered by a new automatic empathy measurement based on psychological theory.”
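The training-data construction Sharma describes can be sketched schematically. The snippet below is a hypothetical illustration, not the published implementation, and `is_empathic` is a stand-in for an empathy classifier:

```python
# Hypothetical sketch of "reverse-engineering" empathic rewrites: build
# (low-empathy, high-empathy) training pairs by removing empathic sentences
# from real responses, so a rewriting model can learn to reintroduce them.
def make_training_pair(response_sentences, is_empathic):
    stripped = [s for s in response_sentences if not is_empathic(s)]
    return " ".join(stripped), " ".join(response_sentences)  # (input, target)
```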
Earlier this year, Sharma was also lead author on a paper introducing HAILEY, an AI agent that facilitates increased empathy in online mental health support conversations. The agent assists peer supporters who are not trained therapists by providing timely feedback on how to express empathy more effectively in their responses to support seekers in a text-based chat. HAILEY built upon Sharma’s work with PARTNER.
In addition, Sharma and his collaborators recently won an Outstanding Paper Award at the 61st annual meeting of the Association for Computational Linguistics (ACL 2023) for developing a framework for incorporating cognitive reframing, a tested psychological technique, into language models to prompt users toward healthier thought processes. With cognitive reframing, a person can take a negative thought or emotion and see it from a different, more balanced perspective.
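As a rough illustration of the idea, and not the paper’s actual framework, cognitive reframing can be posed to a language model as a rewriting task. In the hypothetical sketch below, the prompt wording and the `generate` hook are placeholders for whichever text-generation call is available:

```python
# Illustrative only: prompt wording and the `generate` callable are assumptions.
REFRAMING_PROMPT = """A person wrote down the following negative thought:
"{thought}"

Rewrite this thought from a more balanced perspective. Keep it realistic,
specific, and in the person's own voice."""


def reframe(thought: str, generate) -> str:
    """Ask a language model to produce a balanced reframing of a negative thought."""
    return generate(REFRAMING_PROMPT.format(thought=thought))


# Usage: reframe("I failed one exam, so I'll fail the whole course.", generate=my_llm)
```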
With a focus on people and process, Sharma sees how his research area can continue to grow. He said he hopes to advance AI’s ability to personalize to the user, while also remaining safe and secure.
“Utilizing my experience in designing and evaluating human-centered AI systems for well-being, I will investigate how such systems can learn from and adapt to people’s contexts over time,” Sharma said. “I’ve always been fascinated by technological efforts that support our lives and well-being.”