Each year, Google recognizes approximately 75 exceptional graduate students from around the world through its Google Ph.D. Fellowship Program. The students, who come from a variety of backgrounds, are selected based on their potential to influence the future of technology through their research in computer science and related fields. As part of its 2023 class of Fellows, the company selected two future leaders from the Allen School: Miranda Wei in the Security and Privacy category and Mitchell Wortsman in Machine Learning.
Wei joined the Allen School in 2019 to work with professors Tadayoshi Kohno and Franziska Roesner, co-directors of the Security and Privacy Research Lab. Now in the fifth year of her Ph.D., Wei seeks to empower people and mitigate harms from emerging technologies through her dissertation research that explores how factors like gender affect people’s experiences with technological security and privacy. Her work has already contributed to an important new subfield that centers on sociotechnical factors in security and privacy research.
“Miranda’s focus on assessing conditions of disempowerment and empowerment, and then developing mechanisms to help users improve their computer security and privacy, is truly visionary,” said Kohno, who also serves as the Allen School’s Associate Director for Diversity, Equity, Inclusion and Access. “Miranda has not only identified important work to do, but she has identified a strategy and the key components for moving the whole field forward.”
Grounding her research in critical and feminist theories, Wei explores the (dis)empowerment of users through security and privacy measures in several contexts. Her work draws from the social sciences and multiple fields of computer science, including human-computer interaction and information and communication technologies for development, in addition to security and privacy. In recent work, Wei has applied both quantitative and qualitative approaches, including case studies and participant interviews, to examine topics such as gender-based stereotypes in computer security and the connection between digital safety and online abuse.
“This research sets the foundation for learning from experiences of marginalization to understand broader sociotechnical systems,” explained Wei. “This enables equitable improvements to security and privacy for all online users.”
Wei’s academic journey began as an undergraduate student at the University of Chicago, where she earned a degree in political science with a minor in computer science. In addition to having published over a dozen peer-reviewed papers, Wei volunteers her time to support new and prospective graduate students through the Allen School’s Pre-Application Mentorship Service (PAMS) and Care Committee. She is also active with DUB as a student coordinator and participates in the University of Washington’s graduate application review process as an area chair.
“Miranda has great insight for research problems at the intersection of computer security and privacy and society, and she pursues this vision passionately and independently,” said Roesner. “At the same time, she is a wonderful collaborator and community member who looks out and advocates for others.”
Wortsman, who is also in his fifth year at the Allen School, earned a Google Ph.D. Fellowship for his work with professors Ali Farhadi, co-director of the Reasoning, AI and VisioN (RAIVN) Lab and CEO of the Allen Institute for AI (AI2), and Ludwig Schmidt, who is also a research scientist in the AllenNLP group at AI2. Wortsman's interests span large-scale deep learning, from robust and accurate fine-tuning to stable and low-precision pre-training. His dissertation work seeks to make large pre-trained neural networks more reliable foundations for machine learning.
“One of my main research goals is to develop computer vision models that are robust, meaning that their performance is less degraded by changes in the data distribution,” explained Wortsman. “This will enable the creation of models which are useful and reliable outside of their training distribution.”
With the progress in pre-training large-scale neural networks, machine learning practitioners in the not-so-distant future could potentially spend most of their time fine-tuning these networks. Wortsman studies the loss landscape of large pre-trained models and explores creative solutions for fine-tuning with the goal of improving both accuracy and robustness. He wants his models to be useful to society at large, not exclusively for academic and commercial applications; one of his ongoing projects is a collaboration with the UW School of Medicine.
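As a rough illustration of the kind of idea explored in this line of research on robust fine-tuning, and not a description of Wortsman's specific method, the sketch below linearly interpolates the weights of a pre-trained model and its fine-tuned counterpart. The function and variable names are hypothetical, and all state-dict entries are assumed to be floating-point tensors of matching shapes.

```python
import copy
import torch

def interpolate_weights(pretrained_model, finetuned_model, alpha=0.5):
    """Return a model whose weights are (1 - alpha) * pre-trained + alpha * fine-tuned.

    Assumes both models share the same architecture and that every entry in the
    state dict is a floating-point tensor. Intermediate values of alpha can trade
    off accuracy on the fine-tuning distribution against robustness to shift.
    """
    merged = copy.deepcopy(pretrained_model)
    pre_state = pretrained_model.state_dict()
    ft_state = finetuned_model.state_dict()
    merged_state = {
        key: (1 - alpha) * pre_state[key] + alpha * ft_state[key]
        for key in pre_state
    }
    merged.load_state_dict(merged_state)
    return merged
```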
Wortsman is first author of more than nine peer-reviewed publications, several of which he co-authored as a predoctoral young investigator at AI2, and he collaborated on the development of an open-source reproduction of OpenAI's CLIP model. He has also served as a teaching assistant in the Allen School and as a reviewer for PAMS.
“Mitchell’s work has laid the foundations for many open models that let computers understand and generate images,” said Schmidt. “Mitchell is one of the core developers of OpenCLIP, which is downloaded several thousand times per day and has become part of many AI projects. Every time someone uses Stable Diffusion, one of Mitchell’s models provides the text guidance for the image generation process.”
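For readers curious what using OpenCLIP looks like in practice, below is a minimal sketch of scoring how well an image matches a set of captions, based on the library's documented interface. The model name and pretrained tag shown here are examples and should be checked against the open_clip documentation; the image path is a placeholder.

```python
import torch
from PIL import Image
import open_clip

# Load an OpenCLIP model and its preprocessing transforms.
# "ViT-B-32" / "laion2b_s34b_b79k" is one example checkpoint; others are available.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a dog", "a cat", "a diagram"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then compute cosine-similarity-based probabilities per caption.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability that the image matches each caption
```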