When it comes to miniature robots, Allen School professor Vikram Iyer has big ideas.
“Imagine a fleet of tiny robotic sensors that automatically self-disperse to monitor crops on a farm,” Iyer said. “Others might routinely track inventory in a warehouse or inspect essential infrastructure.”
Before these so-called microrobots can autonomously cross that bridge, however, Iyer seeks to ditch the batteries — and add onboard perception and computation so they can navigate on their own — with the help of artificial intelligence. Meanwhile, his faculty colleague Adriana Schulz has designs on a different kind of power problem: how to use AI to supercharge a new era of creativity and eco-consciousness in computer-aided manufacturing.
“AI has been transformative in so many domains, from health care and e-commerce, to music and the visual arts,” Schulz said. “We haven’t yet seen the same progress in manufacturing, despite the potential to dramatically improve the process for turning ideas into tangible products that shape our daily lives.”
Schulz and Iyer earned National Science Foundation (NSF) CAREER Awards through the agency’s Faculty Early Career Development Program for research that promises to fundamentally alter the way we create and interact with objects and our environment.
Beyond batteries: A powerful new approach to building truly autonomous robotic sensors
The problem of how to untether robots from the limitations of battery power weighs heavily on Iyer’s mind — not least because batteries would add too much payload to a device roughly the size of a bug. A short battery life can also limit how far these devices can travel and the applications they can enable. And then there is the environmental impact, which also can’t be taken lightly.
“Aside from the costs of repeated recharging and replacement, which would be prohibitive and impractical, battery manufacturing and disposal have significant environmental impacts and require critical minerals,” he noted.
Iyer and his students have developed battery-less sensors that can run on solar energy or wireless power from radio signals; however, in real-world conditions, the available power changes over time — making it challenging to fuel a moving robot. Using this NSF CAREER Award, his research group is exploring strategies like buffering small amounts of harvested energy in a capacitor, allowing a robot to move in discrete steps.
“This motion is similar to how insects and birds perch between jumps or flights, which require bursts of energy,” Iyer explained. “This allows us to use power sources that provide small or variable amounts of energy.”
Iyer intends to design robots that incorporate this capability to travel on wheels, jump or do both. Whether via sun, signal or stop-and-go, battery-free power will only get microrobots so far; tasks such as measuring soil quality, detecting hazards or even taking readings in space will also require new kinds of onboard sensing, data processing and control that are optimized for intermittent power.
“We’re also exploring ways to use AI to help design and optimize both the software and hardware,” Iyer explained. “Beyond making single robots that can move on their own, we’re also interested in ways they can communicate with larger robots and AI systems that could coordinate a fleet of tiny robots to work together.”
Taking shape: A new era in AI-driven design and manufacturing grounded in geometry
Tiny robots are not the only objects that could see big gains with the help of AI. As Schulz sees it, we are on the cusp of an exciting new era in which computer-aided design and manufacturing can be applied to just about anything, from couches to cars. To unlock this potential, she aims to leverage advances in program synthesis, large language models and other innovations to assist makers in squaring the need for precision with the urge to experiment.
“Engineering design is a paradoxical process. On the one hand, it demands exactness and the ability to incorporate fine-grained information — which also makes it complex and time-consuming,” Schulz explained. “On the other hand, design tends to thrive on an iterative and exploratory spirit.”
Schulz will apply her NSF CAREER Award to solving that paradox through the development of a geometry-based system dubbed SmartCAD that combines formal reasoning techniques with neural abstractions. SmartCAD will be built on ShapeScript, a domain-specific language capable of interpreting the geometry of new designs — meaning users will no longer need to have their intended shapes in mind at the start of the design process, as they do with conventional CAD systems. She also plans to integrate foundational code generation models, such as GPT, that have been trained on geometric knowledge to support multi-modal user input through text, sketches or images, in addition to enabling users to make structural changes as they edit and optimize their designs.
“Formal methods offer verifiability and synthesis that adheres to constraints, which is crucial for the precision and analytical reasoning inherent in engineering,” Schulz noted. “By combining these methods with recent developments in AI, we can better facilitate and automate design generation and iteration.”
This iterative capability could save not only manufacturing costs but potentially the planet, by allowing users to optimize their designs to make maximum use of materials while minimizing waste. Such tools could also democratize the design and fabrication process by empowering novice and experienced fabricators alike.
“Our approach has the potential to completely upend the traditional design process to empower a new generation of fabricators,” said Schulz. “And since virtually every object in the world originated as a CAD model, even small advancements can have a profound impact.”
Earlier this year, the Computing Research Association honored a select group of undergraduate students from around the country who have made notable contributions to the field through research. The CRA Outstanding Undergraduate Researcher Awards competition historically has been good to Allen School students. To the four most recent honorees — award winner Kianna Bolante, finalists Claris Winston and Andre Ye, and honorable mention recipient Nuria Alina Chandra — even more rewarding than national recognition is realizing the impact their contributions can have on individuals and communities in Washington and beyond.
Kianna Bolante: Empowering students to think critically about technology
As a member of multiple groups underrepresented in computing, CRA award winner Kianna Bolante is always on the lookout for ways to blend her curiosity about technology with a desire to create positive social change.
“This mindset has guided me towards opportunities aligning with my values,” she said.
In addition to multiple service and leadership roles on campus, those opportunities have included research projects spanning accessibility, robotics and social computing. Bolante’s interest in the latter led her last year to pursue what is arguably her most high-profile work to date: a comprehensive social computing curriculum aimed at guiding middle and high school students to consider the different ways they and others interact with technology. Her interest in the project was inspired in part by her realization that, although students spend increasing amounts of their time online, most secondary education curricula do not prepare them to engage meaningfully on the topic of how people interact with digital technologies.
“A community’s core comes from how its individuals interact with each other, and social computing systems are a vital network affecting how humans build community daily — even more so due to the COVID-19 pandemic,” observed Bolante. “Yet, despite its pervasive influence, social computing topics are not explicitly taught in secondary-level courses in the U.S.”
Keen to bridge that gap, Bolante teamed up with professor Amy Zhang and members of the Allen School’s Social Futures Lab to develop six educational modules that teachers could incorporate, in whole or in part, into their classroom lesson plans. The modules, which covered topics spanning online behavior, machine learning and bias, misinformation and more, offered a combination of lecture content, hands-on activities and resources to support in-class discussion as well as personal reflection. After creating the curriculum, Bolante traveled to Seattle-area schools, where she shared the lesson content with more than 1,400 students and their teachers.
“That she has impacted so many young people is not only a show of her strong ability to lead and communicate but also the dedication she brings to educating young people,” Zhang said. “Kianna took modest expectations for the project and blew them out of the water!”
“This area of study affects us all through our constant use of collaborative systems and social technologies, such as social networking platforms, but we don’t all experience them the same way,” Bolante noted. “We incorporated culturally responsive pedagogy in our design to help students connect their learning to their identities. A constant theme is to encourage students to think critically on the positives and negatives of social technology designs.”
Zhang and her Allen School colleague Maya Cakmak were delighted, but not surprised, that the CRA chose to recognize Bolante for her contributions with an Outstanding Undergraduate Researcher Award. Bolante had enrolled in Cakmak’s introductory research seminar as a sophomore, following an initial foray into research via a project analyzing language preferences in relation to disability and her work with Zhang through the DUB REU program during her first year at UW. Although the seminar is designed for those with no previous research experience — and Bolante already had a published paper to her name, after the aforementioned analysis appeared at the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’22) — she hoped to hone her independent research skills and connect with other like-minded students.
Bolante subsequently returned the favor by serving as a teaching assistant in the next offering of the course, sharing her knowledge and experience. She has also participated in extensive outreach activities as a member of various student groups on campus, alongside her continued engagement with middle and high school students. She is blending her interests in pedagogy, social computing and accessibility in her work with another Allen School professor, Kevin Lin, to incorporate content that reflects a functional understanding of different physical, sensory and cognitive abilities into the school’s own Data Structures & Algorithms course. Her many contributions inspired her inclusion in the 2024 class of the Husky 100, a designation that recognizes students who are making the most of their time at UW.
“Kianna has contributed to so many research projects. She is a quick learner and resourceful in solving problems and adapting to the needs of each project. She is also a clear communicator who truly cares about her audience,” said Cakmak. “It is no wonder she has given so many talks and is asked to be on so many panels — and she never passes on an opportunity to use her voice to empower others.”
Nuria Alina Chandra: Decoding the genetic basis of disease
Ever since she first set foot on campus, Nuria Alina Chandra wanted to try her hand at research into the mechanisms of disease. But the difficulty she initially encountered trying to break into a lab — figuratively speaking, of course — galvanized her interest in giving other like-minded students a leg up.
“I originally struggled to find research opportunities in college, so once I found a position, I wanted to help others to do the same,” said Chandra. “As an Undergraduate Research Leader, I connect first-year students from underrepresented backgrounds with research opportunities and advise them on how and why to launch a research career.”
Chandra’s own research career has followed some twists and turns on the way to earning a CRA honorable mention for her contributions. As it turned out, her willingness to take an indirect route appealed to professor Sara Mostafavi in the Allen School’s Computational Biology group.
“Nuria approached me to ask about possible research internships,” recalled Mostafavi. “I was immediately impressed by her strong academic foundation as well as her diverse research experience.”
That diverse experience began before her arrival at the Allen School, when Chandra spent nearly two years working in the Bacteriophage Lab at The Evergreen State College. She then spent a summer as a research assistant in the Subramanian Lab at the Institute for Systems Biology before joining Seattle Children’s Pediatric Sleep & Pain Innovations Lab, where she investigated the development of posttraumatic and postsurgical pain.
In Mostafavi’s lab here at the UW, Chandra uses machine learning to analyze the role of regulatory DNA in cell-type specific gene expression to understand the downstream effects of DNA mutations associated with hereditary diseases such as type 1 diabetes and multiple sclerosis.
“Regulatory regions of the genome are difficult to study experimentally due to their cell-type-specific effects,” Chandra explained. “The ability to accurately predict the transcriptional consequences of regulatory variants is critical to decoding the genetic basis of disease and facilitating the development of personalized treatment systems.”
Previous research suggested that information about the distribution of chromatin accessibility, which regulates gene expression, at the level of DNA base pairs could offer useful insights into regulatory protein behaviors. Such insights would ostensibly improve deep learning models’ predictions about chromatin accessibility across cell types. When Chandra set out to test this hypothesis, however, she found that existing deep learning models like BPNet could only accurately predict the accessibility profiles of a subset of regulatory DNA; they weren’t up to the task of predicting differential accessibility for roughly 90 closely related immune cell types. So Chandra developed a new, more robust convolutional model, bpAI-TAC, that is capable of harvesting data from base-pair resolution accessibility profiles to improve regional accessibility predictions.
After demonstrating that bpAI-TAC outperforms the previous state of the art on regional chromatin accessibility prediction, Chandra has begun to explore techniques for extracting additional biological insights into the mechanisms of disease. In the future, researchers will be able to use Chandra’s model to catalog the effects of genomic variants in order to better understand how they contribute to disease and to assist with targeted drug development.
“I typically don’t assign such a challenging project to an undergraduate, but given Nuria’s enthusiasm and performance on her course work, I took a chance,” Mostafavi said. “And she exceeded my expectations! I’ve been very impressed with Nuria’s dedication, analytical abilities and approach to solving complex problems.”
Claris Winston: Creating technology that removes societal barriers
CRA finalist Claris Winston’s interest in research is not just academic; it’s personal. Through her own experience with scoliosis, Winston came to appreciate firsthand the extent to which societal barriers and inaccessible environments challenge people with disabilities. The realization motivated her to develop a digital assistant called MyScoliCare that has helped patients and therapists around the world manage their treatment of the condition.
“This experience emboldened me to develop impactful, accessible technology during my time as an undergrad,” Winston said. “As my awareness has grown, so has my passion to create technology that improves accessibility for all and removes societal barriers for people with a variety of needs.”
Before pursuing that passion, she got her feet wet doing research in a wet lab — specifically the Molecular Information Systems Lab, where she was lead author of a paper describing combinatorial polymerase chain reaction, a novel technique for selective retrieval of DNA oligo pools to enable DNA-based data storage at scale. With a published paper to her credit, Winston then teamed up with siblings Cailin, Caleb, Chloe and Cleah on a project that applied software engineering techniques to detect and repair faults in brain-computer interfaces. BCIs are implantable devices that augment or restore sensorimotor function in people with neurological disease or spinal cord injury.
“Claris knows how to seek out interesting research opportunities and is driven to engage in projects that are meaningful,” Allen School professor Jennifer Mankoff said.
One might even say Winston has a magic touch when it comes to computing research. In the fall of 2022, she joined Mankoff’s Make4All Group to build upon a pilot project that incorporated embroidery textures into scalable vector graphics. The goal was to develop a pipeline for producing embroidered tactile graphics that make visual media accessible to people who are blind or visually impaired (BVI).
“Common formats for tactile graphics degrade over time,” Winston explained. “Since they are made of fabric, embroidered graphics are more durable, can be washed, can be sent to low-resource regions, and may be used by different people over the years.”
Winston spearheaded multiple aspects of the pipeline, from ensuring the optimization algorithm would produce sufficient contrast between textures, to extending its capabilities to incorporate lines and points, to the actual printing and post-processing of the embroidered graphics. The latter required her to troubleshoot the process for printing on satin fabric, which is prone to wrinkles and slippage, in addition to developing a reliable approach for stitching on both sides to make it braille readable. Winston also took the lead in recruiting, running and analyzing data from user studies with BVI individuals.
Her ability to drive the project toward a successful outcome, including a paper submitted to the journal Transactions on Accessible Computing (TACCESS), was even more remarkable considering the circumstances. Shortly after Winston joined the lab, Mankoff was compelled to take time off for a family obligation. Her mentor’s unexpected absence didn’t faze Winston, who excelled despite minimal supervision. She also happily extended her involvement beyond her originally expected end date to see the project through, and is currently working towards a submission that, if accepted, would be her fourth published paper in as many years. Winston is also collaborating with other researchers to build a value-based health care system that uses large language models. Her work on that project was accepted to a workshop at the IEEE International Conference on Healthcare Informatics (ICHI 2024).
In the past, Winston has worked on research at the intersection of computing and biology. As a research assistant in the Matsen Group at the Fred Hutchinson Cancer Research Center, she contributed to the development of methods for efficiently indexing COVID-19 phylogenetic trees using annotated directed acyclic graphs for the purposes of tracking variants of the virus. The project is yet another example of Winston’s maturity and commitment to real-world impact — an impact that earned her an Outstanding Senior Award at the Allen School’s recent graduation celebration.
“Claris already functions more like a graduate student than an undergraduate and is destined to grow into a strong independent researcher,” Mankoff said. “And with her commitment to service, she is also someone who will contribute to the ongoing work of making the field of computer science more inclusive.”
Andre Ye: Getting philosophical to build better models
CRA finalist Andre Ye is philosophical in his approach to machine learning research. This can be taken literally as well as figuratively; by pursuing degrees in both computer science and philosophy, he aims to explore the intersection of machine learning and topics such as human subjectivity and critical thinking.
“I believe building stronger connections between these two areas is essential to building more robust and usable models,” Ye said.
His first project uniting the two sought to account for human uncertainty in the annotation of medical images for training computer vision models intended to assist with clinical decision-making — a high-stakes task that affects the quality of a model’s downstream predictions and, potentially, patient care.
But first, he had to convince Allen School professor Amy X. Zhang to allow him to pursue the project in her Social Futures Lab. It helped that he had read a recent paper from the lab suggesting a new approach for capturing calibrated uncertainty, which dovetailed nicely with his proposal inspired by a previous stint segmenting kidney tissue images at UW Medicine.
“Normally, I might not give such a young student so much free rein to start with, but Andre quickly showed his ability to independently make progress on research,” said Zhang. “And since neither I nor my Ph.D. student who helped mentor him had any prior experience with medical data, we really leaned on and learned from Andre, given his past work in medical imaging.”
Ye took the reins and ran with them. While conventional approaches attempt to compensate for human subjectivity by incorporating an uncertainty distribution drawn from existing samples, the results are difficult to interpret and don’t necessarily indicate clinical significance. As an alternative, Ye spearheaded the development of Confidence Contours, a novel framework for developing annotations that explicitly account for human-provided uncertainty across a range of possible segmentations that can be used to train any general-purpose segmentation model.
“Rather than providing a singular segmentation, annotators annotate both a region of high confidence and additional areas of lower confidence,” Ye explained. “By leveraging human subjectivity instead of working around it, our approach produces models that are more useful to clinicians than those that rely on standard annotations.”
His efforts were rewarded with an honorable mention at last year’s Conference on Human Computation and Crowdsourcing (HCOMP ’23). Ye subsequently embarked on another project advised by Zhang, along with Allen School professor Ranjay Krishna, examining how variations in visual perception as a product of different linguistic and cultural backgrounds influence the output of image captioning models. In a paper submitted to the International Conference on Learning Representations (ICLR 2024), the team described how multilingual annotations convey more varied and comprehensive information about the objects, their relations and attributes depicted in an image compared to monolingual ones. The analysis showed how even supposedly objective “ground truth” is, in reality, shaped by human subjectivity.
Ye’s latest work explores whether language models, which can assist people with accelerating or even automating rote cognitive tasks, can also be effective tools for deeper thinking. Based on interviews with philosophers at academic institutions across the United States, Ye conceived of the selfhood-initiative model, a framework for defining critical thinking tools that can serve as a basis for designing language models that can be used to help human ideas take shape.
He intends to continue his investigation into the intersection between machine learning and philosophy as a graduate student; in the opinion of his mentors, Zhang and Krishna, he is already operating at that level.
“Andre is an astonishing young scholar in every sense,” said Krishna. “It would be hard to find another student who matches his level of curiosity, creativity and ambition.”
In addition to Ye and his peers, another undergraduate researcher with an Allen School connection, Thanh Dang, earned accolades from the CRA this year. Dang, who is studying computer science and mathematics at Colgate University, received an honorable mention in part for her work with Allen School professor Michael Ernst on a summer project in which she developed an evaluation infrastructure for comparing the performance of commit untangling tools.
For Allen School majors who are interested in pursuing research as part of their Husky experience, CSE 390R Introduction to Research in Computer Science & Engineering offers the opportunity to gain hands-on experience with typical research responsibilities before seeking positions with labs or external research organizations. The next offering of the course, in autumn 2024, will be led by professor Leilani Battle.
Visit the CRA website to learn more about the Outstanding Undergraduate Researcher Awards program.
Hakim Weatherspoon (B.S., ’99) was a rare breed of computer engineering student. As a Husky football player — and one who was named to the Pac-10 All-Academic team, at that — Weatherspoon’s grit and determination on the gridiron were matched by his grit and determination in the classroom. A quarter century after he last took the field for the University of Washington, Weatherspoon scored a College of Engineering Diamond Award from his alma mater for “embracing the power of diversity, equity and inclusion” in his role as Associate Dean for Diversity, Equity and Inclusion in the Cornell Bowers College of Computing and Information Science.
Football and computing may appear to be an unlikely combination, but to Weatherspoon, his journey from student-athlete to computer science professor and college leader is proof that there is a place for everyone when it comes to the field that ultimately defined his career — even if it wasn’t the one he originally imagined for himself.
“Funny thing is, I actually wanted to be an electrical engineer, like my brother and uncle,” Weatherspoon said with a laugh.
As fate would have it, he had enrolled in the Allen School’s introductory programming course — his first exposure to programming — and was doing “pretty well,” he said. So well, in fact, that a counselor in the UW’s Minority Science & Engineering Program (MSEP), which served as a bridge program for aspiring engineering students from underrepresented groups, urged him to apply to computer engineering. At the time, Weatherspoon was skeptical, but he had not yet completed the prerequisites for electrical engineering when he was accepted into the Allen School.
He decided to run with it.
“It really clicked with me, and I enjoyed it a lot. And it made a lot of sense,” Weatherspoon recalled of his first foray into computing. “It was one of those things where you didn’t grow up with programming experience, but you had exposure in college and really liked it and excelled at it.”
His experience in the MSEP not only helped him find his way to computing, but it also helped shape his thinking about DEI as a faculty member.
“The bridge program really changed my life in a lot of ways,” Weatherspoon said. “The gap between high school and college is significant. The MSEP taught us how to behave — where to sit in class, how to actually study. It got you ahead in classes like calculus and chemistry by slowing things down and allowing you to see things for a second time.”
It also, crucially, created a sense of community.
“You have a marginalized or underrepresented population that can feel isolated. If you help offset that, they can do well,” Weatherspoon said.
Professor Ed Lazowska, the Bill & Melinda Gates Chair Emeritus in Computer Science & Engineering at the Allen School, noted that Weatherspoon’s record of leadership on DEI makes him an extraordinary role model — and his influence extends far beyond Cornell.
“Hakim is one-of-a-kind,” said Lazowska, who chaired what was then the Department of Computer Science & Engineering when Weatherspoon was a student. “A true student-athlete. A top-tier computer engineer on the faculty of one of the nation’s top programs. An entrepreneur. A proud Husky. And a person who has changed the landscape for traditionally marginalized individuals in the field through his leadership, mentorship and service — and through the example that he sets.”
Weatherspoon has set that example by developing a range of programs to support students at all stages of their academic journey — and offered a solid game plan for others to follow. After noticing how many students struggled in their second year, Weatherspoon co-founded an immersive summer program at Cornell, inspired by his experience in the MSEP, to prepare rising sophomores for upper-level computer science coursework. The program, known as CSMore, offers participants an opportunity to develop research skills and explore career pathways in academia and in industry. It also gives students an opportunity to build relationships with their professors as well as with each other. An extension program, CSMore Works, provides program alumni with even more opportunities to engage as a cohort with faculty, industry experts and other CSMore alumni.
“Sophomore-level courses at Cornell are really gateway courses — they are required to get into the major, and you have to get a certain GPA,” Weatherspoon explained. “I teach one of those courses and noticed that we were losing a lot of underrepresented minority students. My colleagues noticed the same, so we said, let’s do what we do for pre-freshmen, but as a bridge into sophomore year.”
For students who are interested in exploring research as a career, either in industry or academia, there’s the SoNIC program. This week-long summer research workshop is designed for students enrolled in an undergraduate or master’s degree program in science, technology, engineering or mathematics (STEM) in the United States and Puerto Rico. For those students who have already decided to jump into academic research, there’s Engage LEAP, which supports Cornell Ph.D. students’ academic, professional and personal growth through a variety of resources and activities, from dissertation workshops to networking opportunities. Weatherspoon is an advocate in the LEAP Alliance, which is a nationwide network of 30 Ph.D. programs — including the Allen School — working to diversify leadership in the professoriate.
According to Weatherspoon, the initiative has succeeded in changing the face of Cornell’s Ph.D. program. “We went from essentially zero to nearly 10% underrepresented minority students,” he said.
But Weatherspoon knows that diversifying leadership in the field requires reaching students before they even set foot on a college campus — perhaps even before they dare to dream of a career in computer science. Recognizing that such opportunities are not evenly distributed, particularly when it comes to low-resource communities, he co-organized Code Afrique with a mission “to give African students a window into the world of computer science and its vast potential for development in this era of technology.”
Code Afrique engages students from partner high schools in learning about the latest developments in computer science. The program includes interactive coding workshops culminating in a competitive hackathon along with mentorship sessions led by volunteers from Cornell, MIT, Google and more. Code Afrique has reached as many as 500 students in countries including Ghana, Eswatini and Nigeria. According to Weatherspoon, they’re not the only ones whose lives are changed by the experience.
“It has a deep impact on the volunteers,” he said. “The people who go there — the Cornell students who participate — are often not from those countries. So it affects their thinking and sometimes their career path, as well.”
Whether in a high school classroom half a world away or a college computer science lab close to home, Weatherspoon’s efforts to transform students’ lives — and the entire field of computing — continue to be an inspiration to those who know and work with him.
“He’s a very genuine person, and he cares deeply,” said Kavita Bala, Dean of the Cornell Bowers College of Computing and Information Science. “He’s an optimist, and he brings this positive energy to effect change in a way that not everybody has.”
For students looking to follow in Weatherspoon’s footsteps in computer science, he’s clearly mastered the art of the assist.
“In a lot of places, he’s been the first,” said Ayanna Howard, Dean of Engineering at The Ohio State University and a frequent collaborator of Weatherspoon’s. “He ensures that he provides and builds up an environment where he is not the last.”
“Tonight, we celebrate the hard work and accomplishments of our graduates. Tomorrow, new adventures await.”
With that acknowledgment of the Class of 2024’s trials and triumphs, professor Magdalena Balazinska, director of the Allen School, welcomed a crowd of roughly 5,000 to the school’s graduation celebration. The Allen School expects to award more than 770 degrees in total during the 2023-2024 academic cycle; around 650 of the recipients assembled in the Alaska Airlines Arena at Hec Edmundson Pavilion on June 7 to mark this life-changing milestone in the presence of their families and friends.
‘You write your own story’: Life advice from Andy Jassy
Among those celebrating the graduates’ achievements was Andy Jassy, President and CEO of Amazon and — for this evening, at least — honorary member of the Dawg Pack. Jassy took the stage as the Allen School’s 2024 graduation speaker to share with graduates the most important lessons he had learned over the course of his career, from his early aspirations of being the next Howard Cosell, to his leadership of Amazon Web Services to its position as the undisputed king of the cloud, in the hopes of inspiring them to each be the author of their own story.
Those early aspirations of sportscasting glory were on Jassy’s mind as he entered the arena. Describing the Allen School as “one of the very best engineering schools in the world,” he noted that the graduates had already achieved a remarkable feat. As they begin their next chapter, they could take both inspiration and some hard-earned lessons from Jassy’s own story — or, as he put it, “what I wish I knew when I was 22.”
First lesson to his younger self, and by extension, the newly-minted graduates? “I am not going to be a famous sportscaster,” he acknowledged, accepting that what he thought he wanted to do upon first graduating from college was not, in fact, destined to be his life’s work.
Should the graduates come to a similar realization, they needn’t get discouraged. As Jassy shared in his next lesson, “I’m going to pursue a lot of different jobs” — be it jobs for which he interviewed but didn’t receive an offer, or ones that he tried for some time but decided he didn’t want to spend his life doing. Over the course of his career, Jassy attempted a range of professions, from investment banking and consulting, to selling golf clubs and coaching a high school soccer team, on his way to finding his true calling at Amazon.
For the graduates, then, figuring out what they don’t want to do for a career will be as important as figuring out what they do want to do. While they’re at it, Jassy suggested, they should keep an open mind.
“Life is an adventure. It takes a lot of unpredictable twists and turns,” he said. “You meet people who influence you along the way; you’ll find yourself surprised by what inspires you that you would not have guessed. There are more interesting opportunities to make a difference than you probably realize. Be open to what’s out there.”
His third lesson? “Don’t let others tell you who you are,” Jassy advised, recalling his pre-kindergarten teacher’s assessment that he would never be an athlete because he struggled to hop on one leg. As it turned out, his performance at the tender age of five did not prevent him from going on to play multiple sports in high school and college. What is easy to ignore when we are young becomes harder to ignore as we get older, he acknowledged; and yet, the graduates should not allow the judgments of “uninformed people” to define them.
“Nobody writes your book for you,” Jassy insisted. “You write it.”
And as they do, they should remember that not every presentation or meeting equates to a pass/fail referendum on their competence — despite his own early preoccupation to the contrary. Jassy noted that his biggest regrets were not the occasions that he failed, which he referred to as his “proudest scars,” but rather the times he didn’t take a risk in the first place.
“There is no person in the world who performs perfectly, or has it right 100% of the time, or whose ideas are coherent or sensible every time. That’s not reality,” he said. “It is, however, a sure bet that you will never do something needle-moving if you don’t put yourself out there and take a shot.”
While there are many things the graduates won’t be able to control, Jassy pointed out that one thing they will always be in control of is their attitude. And although members of the Class of 2024 may be marking the official end to their time at the university, they should not regard their student days as being completely behind them.
“Be a willing and ravenous learner,” he urged. “Believe me, life is much more fun and rewarding when you’re learning.”
In closing, Jassy noted that the Class of 2024 graduates will have many options, and this next chapter represents just one of them.
“Remember that you write your own story,” he said.
Alumni Impact Awards: Leading by example
Continuing an annual tradition, Ed Lazowska, professor and Bill & Melinda Gates Chair Emeritus in the Allen School, ascended the stage to announce the recipients of the Alumni Impact Awards. The awards reinforce for the next generation of alumni how an Allen School education can lead to real-world impact.
John Colleran (B.S., ‘87)
John Colleran barely had time to catch his breath following his graduation from UW in 1987; two days later, he was starting his new job at Microsoft. He spent the ensuing 37 years driving engineering investment and innovation in successive versions of the Windows operating system before stepping into his current role leading the company’s Developer Productivity team for the Windows and Azure Engineering Systems group. In that role, he has led the creation of the WAVE engineering productivity tools in addition to the development of industry-leading methods for measuring the impact of various tools and practices on developer productivity.
As the spouse of another UW undergraduate alum and proud father of a student set to walk across the stage that very same evening to collect her own Allen School degree, Colleran is a Husky through and through.
“John has a long list of engineering accomplishments in the systems arena that have directly contributed to Windows’ dominance in both the business and consumer spaces,” said Lazowska. “John is also a good friend and a good person.”
Karen Liu (Ph.D., ‘05)
Since completing her degree working with professor Zoran Popović in the Allen School’s Graphics & Imaging Laboratory (GRAIL), Ph.D. recipient Karen Liu has made a series of fundamental contributions in computer graphics and robotics spanning physics-based animation, reinforcement learning, optimal control and more.
Liu launched her faculty career at the University of Southern California before she was recruited away by Georgia Tech and, later, Stanford University, where she currently directs The Movement Lab. Liu focuses on the development of algorithms and software that enable digital agents and physical robots to interact with the world through intelligent and natural movements, drawing upon principles from computer science, mechanical engineering, biomechanics, neuroscience and biology.
“It’s exciting, high-impact, interdisciplinary work for which she has been widely recognized,” said Lazowska, alluding to a litany of honors that includes a TR-35 Award, Sloan Research Fellowship and ACM SIGGRAPH Significant New Researcher Award. “Karen, like John, is building on her Allen School education to change the world.”
Student Awards: Recognizing scholarship, leadership and service
Before they go out and use their education to make a difference in the world, many Allen School students find ways to make a difference to the campus community through their scholarship, leadership and service. Each year, the school recognizes a subset of these students for going above and beyond in supporting their fellow students, advancing the field through research and contributing to a vibrant and inclusive school community.
Undergraduate Service Awards
In her role as chair of the student group Computing Community (COM2), honoree Vidisha Gupta stood out “as a phenomenal leader who has worked tirelessly to improve the Allen School experience for students.” Gupta’s contributions included organizing large school-wide events aimed at building community and fostering a sense of connection, whether virtual or in-person. Gupta also represented undergraduate students on the school’s Diversity Committee and during the hiring process for teaching faculty.
Meanwhile, award recipient Lee Janzen-Morel was a driving force behind the creation of the Diversity & Access Lounge, a space for students from underrepresented groups in computing to find community and share experiences. “In the two years since it opened, this space has positively impacted many students and will continue to have an impact in the years to come.” Janzen-Morel also provided extensive support to Ability, the student group focused on promoting accessibility at the Allen School.
During her time at the Allen School, Kristy Nhan “has done immense work to help improve the Allen School experience for students of color, first-generation students, and women in computing.” Nhan’s impact was also felt through her service as a lead CSE Ambassador performing outreach to local K-12 students. In addition, she was a volunteer leader with student groups GEN1 and Women in Computing, for which she developed a new internship program and coordinated high school visits for young women of color, respectively.
Olivia Wang approached their role as a peer adviser “with passion, attention to detail, and kindness.” Wang was a peer adviser for two years, during which time they assisted current and prospective students in navigating the undergraduate experience and connecting with academic resources. Wang also was the first-ever peer adviser focused on undergraduate research, organizing events such as the “Getting into Research” workshop and spring research showcase to make opportunities in the Allen School’s labs more accessible to students.
Outstanding Senior Awards
Recipient Grace Brigham, who earned both her bachelor’s and master’s degrees in the Allen School’s B.S./M.S. program, was recognized for her research examining the use of artificial intelligence to generate non-consensual intimate imagery and the impact of AI bias on humans — the same project for which she earned a Best Master’s Thesis Award. In addition, Brigham earned accolades for her service as a teaching assistant for the direct admit seminar for new freshmen and her contributions as a mentor for Changemakers in Computing.
Fellow award winner Heer Patel was likewise recognized for research excellence — in this case, her work in data visualization that explored the application of AI to generate educational materials for data science students. In fact, Patel’s leadership, hard work, and dedication led to the submission of a paper on the subject, for which she is first co-author. Patel will remain in the Allen School to pursue her master’s degree as part of the B.S./M.S. program.
Matthew Shang was singled out for his “mathematical prowess” and his contributions to research in chaotic systems and probabilistic programming techniques for analyzing errors in laboratory procedures — the latter in collaboration with members of the Department of Electrical & Computer Engineering. Another undergraduate who is enrolling directly in the B.S./M.S. program, Shang has already completed an impressive amount of graduate-level coursework.
Claris Winston was honored for her research into embroidered tactile graphics to support individuals who are blind or visually impaired, a project for which she was lead author of a journal submission to Transactions on Accessible Computing. She also was a finalist in the Computing Research Association’s Outstanding Undergraduate Researcher Awards competition. She, too, will pursue her master’s degree at the Allen School starting this fall.
Thesis Awards
Professor Maya Cakmak, who chairs the Allen School’s Undergraduate Research Committee, presented the research thesis awards. Noting the school’s dual mission to both educate students and push the boundaries of computing via research, Cakmak reminded the audience that it’s not just faculty and Ph.D. students doing the latter.
“There are also opportunities for undergraduate and masters students to get involved in research labs, learn about the research process, and make their own contributions to the field,” she said.
Kavel Rao earned the Best Senior Thesis Award for “What Makes it Ok to Set a Fire? Iterative Self-Distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations,” completed under the supervision of professor Yejin Choi in the Allen School’s Natural Language Processing group. In the paper — which was published at last year’s Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) — Rao put forward new methods for emphasizing contextual information and commonsense reasoning in models tasked with making moral decisions.
Grace Brigham, who also took home an Outstanding Senior Award, was honored with the Outstanding Master’s Thesis Award for “Violation of my body: Perceptions of non-consensual (intimate) imagery.” That work, which Brigham completed under the supervision of professor Tadayoshi Kohno in the Security and Privacy Research Lab, provided new insights into people’s perceptions of AI-generated non-consensual imagery that are already being used to inform national and international conversations about how to mitigate harms associated with such use of AI. The paper was accepted to the 20th Symposium on Usable Privacy and Security (SOUPS 2024).
Teaching Awards: Honoring those who inspire
Bob Bandes Memorial Awards for Excellence in Teaching
Each year, the Allen School honors the students who take on the role of teaching assistant. TAs wear many hats: course administrator, supplemental instructor, grader, and cheerleader. In all, the school received more than 800 nominations for roughly 260 individual TAs for this year’s Bandes Awards; of those, the school selected four winners and three honorable mentions, which were presented by members of the teaching faculty.
Award winner Anjali Agarwal served as a TA nine times, including for the Allen School’s introductory programming courses as well as Data Structures and Parallelism and Foundations of Computing. She also was an instructor for the latter as well as the school’s Mathematics for Computation Workshop. One student summed up the nominators’ assessment by saying, “What stands out most about Anjali is her kindness, how she approaches each student with understanding and without judgment to meet them where they are at and bring them along.” Agarwal will start this fall as a member of the teaching faculty at Northwestern University.
Jaela Field earned an award for her nine quarters of service as a TA for Software Design & Implementation. Noting that Field knows the content inside and out, one nominator enthused, “Saying that Jaela goes above and beyond is an understatement. She goes above, beyond, out of this world, around it, and then a couple more times around it…She makes the impossible possible through her passion to help others learn.” The instructors she has worked with were similarly effusive about her dedication and willingness to jump in and help with all aspects of a course, citing her combination of technical and people skills.
Winner Jasmine Chi is a seven-time TA who has assisted with the Allen School’s revamped introductory series as well as the freshman direct admit seminar. In addition to her enthusiasm, patience and reliability, students appreciated how Chi “ensures that every student that attends her lectures feels seen” while prioritizing their understanding of the concepts covered in the course. An instructor applauded her leadership on course infrastructure, including her management of assignment development and grading, while keeping up with course communication and supporting her fellow TAs — qualities that made her “mission critical” to the course.
Award recipient Joe Spaniac served as a TA for an impressive 15 quarters, including multiple offerings of the introductory series, and as an instructor for one course — with a second to come this summer. One of the instructors with whom he worked raved that Spaniac was the most effective TA they had ever worked with, noting that “most issues that cropped up during the quarter were never seen by the instructors because Joe got there first.” Spaniac’s students particularly appreciated how he made them feel sufficiently comfortable, even encouraged, to ask the so-called dumb questions and get the support they needed.
Honorable mentions went to Hannah Lee, who has TAed a total of six times for multiple offerings of the undergraduate Machine Learning and the graduate-level Machine Learning for Neuroscience courses; Yuxuan Mei, a TA for three different courses — including Computational Design & Fabrication and Intermediate Data Programming, for which Mei will be an instructor this summer; and Zhi Yang Lim, who served as a TA for four quarters of Foundations of Computing II, including one as lead TA.
COM2 Teaching Awards
While graduation is a time for celebrating students, each year the student group Computing Community, or COM2, turns the spotlight back on the faculty who, in chair Kianna Bolante’s words, “inspired us, challenged us, and shaped our paths” through its Undergraduate Teaching Awards.
The first honoree, teaching professor Matt Wang, earned accolades for being an “incredible educator who has made a remarkable impact in just his first year in the Allen School” as an instructor in the introductory programming series and System and Software Tools. Bolante highlighted in particular how Wang comes up with creative ways to engage students through interactive demonstrations and relatable examples, as well as his ability to create a welcoming classroom environment where everyone feels included and valued.
“His talent for simplifying complex concepts, and eagerness to offer additional assistance and encouragement, demonstrates his unwavering commitment to his students’ growth and achievement,” she said.
The second recipient, teaching professor Miya Natsuhara, was singled out by students for her “contagious energy,” “engaging teaching style” and “efforts to know her students” — even while running large introductory programming courses. It is an approach that has left a lasting impression on those who take her classes, and inspired many to pursue careers in computing.
“Her approachable character and genuine concern for her students’ well-being always creates a supportive environment,” Bolante said, noting that Natsuhara “not only excels in teaching, but also goes the extra mile” for her TAs by empowering them to contribute to course development and supporting their professional growth.
‘We will be cheering you on’
Speaking of professional growth, Balazinska had encouraging words for the graduates about to flip their tassels and depart the arena as Allen School alumni.
“You are starting your careers at an especially challenging time for society. But along with those challenges come many opportunities,” Balazinska reminded the graduates in the arena. “Opportunities to use your Allen School education, your passion, your kindness, and your creativity to make a positive impact on the people and the world around you.
“And just as we did tonight, we will be cheering you on.”
Congratulations to all of our graduates! We can’t wait to see what your next chapter brings!
Digital pathology promises to revolutionize medicine by transforming tissue samples into high-resolution images that can be shared and analyzed using powerful new computational tools. Instead of looking at single slides through a microscope, scientists and clinicians can analyze vast quantities of tissue samples at once, identifying anomalies, searching for patterns, and — thanks to advances in artificial intelligence — making predictions about various disease characteristics to assist with clinical decision-making and personalize patient care.
But there are multiple challenges that have to be overcome to achieve this bold new vision for medicine, from the scarcity of real-world data to the amount of compute required for model pretraining. Now, thanks to researchers at the University of Washington’s Allen School, Microsoft Research and Providence, the prospects for digital pathology are looking even brighter — around a billion times brighter. In a paper recently published in the journal Nature, the team unveiled Prov-GigaPath, a groundbreaking new open-access foundation model for digital pathology that combines real-world, whole-slide data with individual image tiles at an unprecedented scale.
Prov-GigaPath was developed with what the team believes to be the largest whole-slide pretraining effort to date, using samples collected from over 30,000 cancer center patients in the Providence health network. Prov-Path — a portmanteau of “Providence” and “pathology” — includes more than 1.3 billion image tiles derived from roughly 170,000 whole pathology slides. That’s five times larger than another whole-slide dataset, The Cancer Genome Atlas (TCGA), and is drawn from twice the number of patients. And whereas TCGA contains only tissue resections, the data underpinning Prov-GigaPath includes both resections and biopsies covering 31 major tissue types.
According to co-first author and Allen School Ph.D. student Hanwen Xu, the scale of the inputs, coupled with finer details about the samples, gives Prov-GigaPath the edge when it comes to potential integration with clinical workflows.
“The Prov-Path dataset is both really diverse and really robust, including meta information such as genomic mutation profiles, longitudinal data and real-world pathology reports,” explained Xu. “Those details provide us with a great opportunity to develop stronger models that can handle a wide range of real-world tasks, like tumor detection, cancer staging, and mutation prediction.”
The researchers developed Prov-GigaPath using an efficient vision transformer architecture, GigaPath, that was built on a version of LongNet’s dilated attention mechanism. This approach allowed for efficient aggregation of the tens of thousands of image tiles contained in a single slide, thus significantly lowering the computational cost for pretraining compared to the standard transformer model. It also enables Prov-GigaPath to pick out patterns other models cannot by embedding image tiles as visual tokens; not only is it trained on the characteristics of the individual tokens, but it can also detect patterns across a sequence of tokens that corresponds to an entire slide.
“The key technical challenge is that whole-slide pathology images are extremely large compared to other images studied in the computer vision domain. A pathology image could be as large as 120,000 by 120,000 pixels,” noted Allen School professor and co-corresponding author Sheng Wang. “On one hand, it is challenging for a pathologist to review the entire slide, necessitating the development of automated AI approaches. On the other hand, it is challenging for existing generative AI models, which are often transformer-based, to be scaled to sequences from such large images due to the computational burden.
“We address this problem by using a new neural network architecture that can scale to very long sequences,” Wang continued. “As a result, Prov-GigaPath makes predictions based on global patterns as well as localized ones, which yields state-of-the-art performance on multiple prediction tasks.”
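The dilated-attention idea Wang describes can be sketched in a few lines. The toy below is illustrative only and is not the GigaPath/LongNet implementation: it keeps every `dilation`-th tile token within each segment before computing attention, shrinking each attention matrix by roughly a factor of the dilation squared. (The real LongNet mechanism combines multiple segment lengths and dilation rates so that every token is covered; this single-pattern sketch simply leaves dropped positions as zeros.)

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(x, segment_len, dilation):
    """Toy dilated self-attention over a sequence of tile embeddings.

    Full self-attention over n tokens costs O(n^2). Here each segment
    keeps only every `dilation`-th token, so the per-segment attention
    matrix shrinks by a factor of dilation^2. Positions skipped by this
    single dilation pattern are left as zeros in the output.
    """
    n, d = x.shape
    out = np.zeros_like(x)
    for start in range(0, n, segment_len):
        seg = np.arange(start, min(start + segment_len, n))
        keep = seg[::dilation]                 # sparsify within the segment
        q = k = v = x[keep]                    # identity projections for the sketch
        attn = softmax(q @ k.T / np.sqrt(d))   # (len(keep), len(keep)) matrix
        out[keep] = attn @ v
    return out
```

With a slide-length sequence of tens of thousands of tile tokens, this kind of sparsification is what makes attention over an entire whole-slide image computationally feasible.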
Those tasks range from standard histopathology tasks like cancer subtyping to more challenging tasks like mutation prediction. In all, Wang and his colleagues evaluated Prov-GigaPath on a set of 26 prediction tasks, comparing their new model’s performance against that of other digital pathology foundation models such as HIPT and REMEDIS trained on more limited datasets. Prov-GigaPath outperformed them all on 25 out of the 26 tasks — even when the competing model was pretrained on the same dataset used for the task and Prov-GigaPath was not. The new model also set a new benchmark for predicting cancer subtypes from images, outperforming other models on each of nine cancer types; for six of the subtypes, the performance improvement over the next-best model was significant.
According to Xu, these results point to the potential utility of the team’s approach for a variety of downstream tasks — for cancer, and beyond.
“With Prov-GigaPath, we showed how to build a model that can do representation learning of high-resolution image data efficiently at a really large scale,” said Xu. “I’m excited to see how researchers will take this base model and apply it to other biomedical problems to advance AI-assisted diagnostics and decision support.”
Researchers and clinicians can freely access Prov-GigaPath on Hugging Face. Going forward, Xu and Wang hope to extend this approach to other imaging data, such as that derived from very large microscopy images for proteins and single cells. They are actively seeking pathologists and physicians within UW Medicine who would like to collaborate on this effort, particularly those who are interested in using the existing model to analyze their data or those with large amounts of imaging data that the team could leverage to train a new model.
When it comes to the field of human-computer interaction, University of Washington professor Shwetak Patel aims, in his own words, “to think outside the box and challenge existing assumptions.” Patel, who holds the Washington Research Foundation Entrepreneurship Endowed Professorship in the Allen School and the UW Department of Electrical & Computer Engineering, has repeatedly put that philosophy into practice, inventing entirely new areas of research — and even new industries — in the process. Last week, the Association for Computing Machinery’s Special Interest Group on Computer Human Interaction inducted Patel into the SIGCHI Academy in honor of his trailblazing contributions in health, sustainability and interaction research.
Patel joined the UW faculty in 2008, when he established the Ubicomp Lab to explore novel sensing and interaction technologies. In parallel with his work on projects such as smart paper and on-body sensing using ultrasound, Patel began playing with his phone. But rather than obsessing over Candy Crush or Sudoku, he fixated on the potential to repurpose this nearly ubiquitous device that combined sensing, data processing and communication to expand access to health care — particularly for people in low-resource settings.
The lab’s release of SpiroSmart, the first mobile app for measuring lung function by having a patient exhale into the phone’s microphone, proved to be a game changer.
“Instead of having to travel to a clinic with a spirometry device, people with chronic lung disease could use SpiroSmart to measure their lung function in their own home,” Patel said. “We showed how these inexpensive built-in sensors could be used to augment patient care by supporting routine screening and monitoring.”
As the sensors in phones got more sophisticated, so, too, did the ways in which Patel sought to use them. Case in point: the camera, which Patel and his collaborators used to prototype new screening methods for infant jaundice, anemia, adult jaundice — an early indicator of pancreatic cancer — and brain injury, along with measuring vital signs such as heart and respiration rate via video. With these and other projects, Patel and his colleagues helped to establish the new field of mobile health sensing. They formed a startup, Senosis, to commercialize this work; the company was subsequently acquired by Google. Patel now divides his time between Google, where he is a Distinguished Scientist and Head of Health Technologies, and UW, where he serves as the Allen School’s Associate Director for Development & Entrepreneurship.
That body of work only scratched the surface of what smartphones can do when it comes to monitoring and managing our health. Last year, Patel and his collaborators touched on a new way to use the capacitive sensing capabilities of the phone’s screen to measure blood glucose. Using a modified version of widely available test strips that incorporates an inexpensive biosensor and draws power from the flash, they created a tool called GlucoScreen that communicates test data via simulated taps on the phone’s screen. The app then processes the results right on the phone, producing a blood glucose reading with an accuracy comparable to commercial glucometers.
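As a purely illustrative aside: the article does not describe GlucoScreen’s actual encoding, but “data communicated as simulated taps” can be pictured as a simple time-slotted scheme in which a tap landing in slot i sets bit i of the reading. Everything below — the slot width, the frame length, and the decoder itself — is hypothetical, not the published design.

```python
# Hypothetical decoder for a tap-based data channel. A strip signals a
# reading as taps in fixed time slots; a tap in slot i sets bit i
# (most-significant bit first). Slot width and frame length are made up.
SLOT_MS = 50          # hypothetical time-slot width in milliseconds
FRAME_BITS = 9        # hypothetical frame: 9 bits covers readings 0..511

def decode_taps(tap_times_ms):
    """Map tap timestamps (ms since frame start) to an integer reading."""
    bits = [0] * FRAME_BITS
    for t in tap_times_ms:
        slot = int(t // SLOT_MS)
        if 0 <= slot < FRAME_BITS:
            bits[slot] = 1    # a tap in this slot means the bit is set
    value = 0
    for b in bits:            # fold bits MSB-first into an integer
        value = (value << 1) | b
    return value
```

The appeal of such a scheme is that it needs no radio, no pairing, and no power beyond what the strip can harvest, which matches the article’s description of a strip that draws power from the phone’s flash.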
Their proof of concept showed promise for mass screening for prediabetes — and potentially much more.
“Now that we’ve shown we can build electrochemical assays that can work with a smartphone instead of a dedicated reader, you can imagine extending this approach to expand screening for other conditions,” Patel remarked at the time.
Capacitive sensing came in handy for another recent project, FeverPhone, that used a combination of the touchscreen and the thermistors, typically used to monitor battery temperature, to instead measure a person’s body temperature. Patel and his team subsequently put the phone down in favor of another popular accessory that could do double duty as a temperature sensor: the Thermal Earring. More than a fashion statement, this piece of wearable and rechargeable bling can measure changes in earlobe temperature throughout the day, rather than reporting a daily average like other wearables, with potential applications for monitoring fever, stress, ovulation and more.
While Patel enjoys the technical challenge of expanding how and what sensors can measure, it’s the human side of research that he finds most compelling — and most rewarding.
“HCI research has always been critical to our work in health in terms of really understanding user needs,” said Patel. “It’s how we ensure what we are building has the best chance for the biggest impact across the world.”
But Patel is keen to ensure that impact does not come at the expense of the environment by making sensors themselves more sustainable. For example, he recently contributed to the development of a printed circuit board made of a type of polymer called vitrimer that can be repeatedly recycled. Both the polymer and the electronic components in vPCBs can be reused without degrading their performance, thus reducing a significant source of e-waste.
The project is the latest in a long line of research supporting sustainability that Patel has pursued during his career. Other contributions include a method for measuring residential power and water usage at the device or fixture level using a single sensor and an ultra-low power, whole-home sensing system to monitor for potential hazards.
Patel is joined in this year’s class of SIGCHI Academy inductees by Allen School adjunct faculty member Julie Kientz, professor and chair of the UW Human Centered Design & Engineering Department, who was honored for work to advance interaction technologies that support child development, accessibility, education and health. Another HCDE faculty member and Allen School adjunct, Kate Starbird, received the SIGCHI Societal Impact Award for her research into the use of communications technologies during crisis events and techniques for addressing the spread of misinformation and disinformation online.
Former Allen School professor James Landay, now a faculty member at Stanford University and associate director of Stanford’s Institute for Human-centered Artificial Intelligence (HAI), earned a Lifetime Research Award for his contributions to mobile and ubiquitous computing, technologies for supporting education and behavior change, user interface design and more.
Fascinated by the inner workings of machine learning models for data-driven decision-making, Allen School professor Simon Shaolei Du constructs their theoretical foundations to better understand what makes them tick and then designs algorithms that translate theory into practice. Du’s faculty colleague Adriana Schulz, meanwhile, has worked out how to make the act of making more accessible and sustainable through novel techniques in computer-aided design and manufacturing, drawing upon advances in machine learning, fabrication, programming languages and more.
Those efforts received a boost from the Alfred P. Sloan Foundation earlier this year, when Schulz and Du were recognized among the 2024 class of Sloan Research Fellows representing the next generation of scientific leaders.
Simon Shaolei Du: Unlocking the mysteries of machine learning
Deep learning. Reinforcement learning. Representation learning. Recent breakthroughs in the training of large-scale machine learning models are transforming data-driven decision-making across a variety of domains and fueling developments ranging from self-driving vehicles to ChatGPT. But while we know that such models work, we don’t really know why.
“We still don’t have a good understanding of why these paradigms are so powerful,” Du explained in a UW News release. “My research aims to open the black box.”
Already, Du has been able to poke holes in said box by demystifying several of the principles underlying the success of such models. For example, Du offered the first proof for how gradient descent optimizes the training of over-parameterized deep neural networks — so-called because the number of parameters significantly exceeds the minimum required relative to the size of the training dataset. Du and his co-authors showed that, with sufficient over-parameterization, gradient descent could find the global minima to achieve zero training loss even though the objective function is non-convex and non-smooth. Du was also able to explain how these models generalize so well despite their enormous size by proving a fundamental connection between deep neural network learning and kernel learning.
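The over-parameterization phenomenon can be seen in a toy experiment (an illustrative sketch, not Du’s proof technique; the sizes, data and learning rate below are arbitrary choices): a two-layer ReLU network with far more hidden units than training samples, trained by plain gradient descent, drives its training loss toward zero despite the non-convex objective.

```python
import numpy as np

# Illustrative only: 512 hidden units for just 10 random training samples.
rng = np.random.default_rng(0)
n, d, h = 10, 5, 512
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

W1 = rng.normal(size=(d, h)) / np.sqrt(d)   # first-layer weights
w2 = rng.normal(size=h) / np.sqrt(h)        # second-layer weights

lr = 0.005
for _ in range(10000):
    a = np.maximum(X @ W1, 0.0)             # hidden activations (ReLU)
    err = a @ w2 - y                        # prediction error
    w2 -= lr * a.T @ err / n                # gradient step, second layer
    W1 -= lr * X.T @ (np.outer(err, w2) * (a > 0)) / n  # first layer

loss = 0.5 * np.mean((np.maximum(X @ W1, 0.0) @ w2 - y) ** 2)
print(f"final training loss: {loss:.2e}")   # near zero
```

The loss lands at essentially zero even though nothing about the objective is convex, which is the behavior Du’s analysis explains in the over-parameterized regime.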
Another connection Du has investigated is that between representation learning and recent advances in computer vision and natural language processing. Representation learning bypasses the need to train on each new task from scratch by drawing upon the commonalities underlying different but related tasks. Du was keen to understand how using large-scale but low-quality data to pre-train foundation models in the aforementioned domains effectively improves their performance on downstream tasks for which data is scarce — a condition known as few-shot learning. He and his collaborators developed a novel theoretical explanation for this phenomenon by proving that a good representation combined with a diversity of source training data are both necessary and sufficient for few-shot learning on a target task. Following this discovery, Du contributed to the first active learning algorithm for selecting pre-training data from the source task based on their relevance to the target task to make representation learning more efficient.
From representation to reinforcement: When it comes to modeling problems in data-driven decision-making, the latter is the gold standard. And the standard wisdom is that long planning horizons and large state spaces are why it is so difficult — or at least it was. Du and his collaborators turned the first assumption on its head by showing that sample complexity in reinforcement learning is not dependent upon whether the planning horizon is long or short. Du further challenged prevailing wisdom by demonstrating that a good representation of the optimal value function — which was presumed to address the state space problem — is not sufficient to ensure sample-efficient reinforcement learning across states.
“My goal is to design machine learning tools that are theoretically principled, resource-efficient and broadly accessible to practitioners across a variety of domains,” said Du. “This will also help us to ensure they are aligned with human values, because it is apparent that these models are going to play an increasingly important role in our society.”
Adriana Schulz: Making a mark by remaking manufacturing-oriented design
AI’s influence on design is already being felt in a variety of sectors. But despite its promise to enhance quality and productivity, its application to design for manufacturing has lagged. So, too, has the software side of the personalized manufacturing revolution, which has failed to keep pace with hardware advances in 3D-printing, machine knitting, robotics and more. This is where Schulz aims to make her mark.
“Design for manufacturing is where ideas are transformed into products that influence our daily lives,” Schulz said. “We have the potential to redefine how we ideate, prototype and produce almost everything.”
To realize this potential, Schulz develops computer-aided design tools for manufacturing that are grounded in the fundamentals of geometric data processing and physics-based modeling and also draw from domains such as machine learning and programming languages. The goal is to empower users of varying skill levels and backgrounds to flex their creativity while optimizing their designs for functionality and production.
One strategy is to treat design and fabrication as programs — that is, a set of physical instructions — and leverage formal reasoning and domain-specific languages to enable users to adjust plans on the fly based on their specific goals and constraints. Schulz and her collaborators took this approach with Carpentry Compiler, a tool for exploring tradeoffs between production time, cost of materials and other factors of their design before generating fabrication plans. She subsequently parlayed advances in program synthesis into a new tool for efficiently optimizing plans for both design and fabrication at the same time. Leveraging a technique called equivalence graphs, or e-graphs, Schulz and her team took advantage of inherent redundancies across design variations and fabrication alternatives to eliminate the need to recompute the fabrication cost from scratch with every design change. In a series of experiments, the new framework was shown to reduce project costs by as much as 60%.
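The intuition for why exploiting redundancy pays off can be seen in a much simpler setting (an illustrative sketch using plain memoization with made-up operation costs, not the actual e-graph machinery): when two design variants share most of their fabrication sub-plans, the shared costs only need to be computed once.

```python
from functools import lru_cache

calls = 0  # counts how many sub-plan costs are actually evaluated

@lru_cache(maxsize=None)
def cost(plan):
    """Fabrication cost of a plan, a nested tuple of (operation, *sub_plans)."""
    global calls
    calls += 1
    op, *parts = plan
    base = {"cut": 3, "drill": 1, "assemble": 2}[op]  # hypothetical costs
    return base + sum(cost(p) for p in parts)

leg = ("cut",)
table_a = ("assemble", ("drill",), leg, leg, leg, leg)  # drilled top, four legs
table_b = ("assemble", ("cut",), leg, leg, leg, leg)    # only the top differs

cost(table_a)
cost(table_b)
print(calls)  # 4 evaluations instead of 12: shared sub-plans are costed once
```

An e-graph generalizes this idea by compactly representing whole classes of equivalent designs and fabrication plans so that cost computations are shared across all of them at once.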
Rising capabilities in AI have also given rise to a new field in computer science known as neurosymbolic reasoning, a hybrid approach to representing visual and other types of data that combines techniques from machine learning and symbolic program analysis. Schulz leveraged this emerging paradigm to make it easier for users of parametric CAD models for manufacturing to explore and manipulate variations of their designs while automatically retaining essential structural constraints. Typically, CAD users who wish to engage in such exploration have to go to the time and trouble of modifying multiple parameters simultaneously and then sifting through a slew of irrelevant outcomes to identify the meaningful ones. Schulz and her team streamlined the process by employing large language and image models to infer the space of meaningful variations of a shape, and then applying symbolic program analysis to identify common constraints across designs. Their system, ReparamCAD, offers a more intuitive, efficient and interactive alternative to conventional CAD programs.
In addition to introducing more flexible design processes, Schulz has also contributed to more flexibility on the factory floor. Many assembly lines rely on robots that are task-specific, making it complex and costly to pivot the line to new tasks. Schulz and her colleagues sidestepped this problem by enabling the creation of 3D-printable passive grippers that can be swapped out at the end of a robotic arm to handle a variety of objects — including irregular shapes that would be a challenge for conventional grippers to manipulate. She and her team developed an algorithm that, when fed a 3D model of an object and its orientation, co-optimizes a gripper design and lift trajectory that will enable the robot to successfully pick up the item.
Whether it’s repurposed robots or software that minimizes material waste, Schulz’s past work offers a glimpse into manufacturing’s future — one that she hopes will be friendlier not just to people, but also to the planet.
“Moving forward, I plan to expand my efforts on sustainable design, exploring innovative design solutions that prioritize reusability and recyclability to foster circular ecosystems,” she told UW News.
Two other researchers with Allen School connections were among 22 computer scientists across North America recognized in the 2024 class of Sloan Research Fellows. Justine Sherry (B.S., ‘10) is a professor at Carnegie Mellon University, where she leads research to modernize hardware and software for implementing middleboxes to make the internet more reliable, efficient, secure and equitable for users. Former visiting student Arvind Satyanarayan, who earned his Ph.D. from Stanford University while working with Allen School professor Jeffrey Heer in the UW Interactive Data Lab, is a professor at MIT, where he leads the MIT Visualization Group in using interactive data visualization to explore intelligence augmentation that will amplify creativity and cognition while respecting human agency.
In addition, a third UW faculty member, chemistry professor Alexandra Velian, earned a Sloan Research Fellowship for her work on new materials to advance decarbonization, clean energy and quantum information technologies.
For as long as she can remember, Allen School professor Su-In Lee wanted to be a scientist and professor when she grew up. Her father, who majored in marine shipbuilding engineering, would sit her down at their home in Korea with a pencil and paper to teach her math. Those childhood lessons instilled in Lee not just a love of the subject matter but also of teaching; her first pupil was her younger brother, with whom she shared what she had learned about arithmetic and geometry under her father’s tutelage.
Fast forward a few decades, and Lee is putting those lessons to good use in training the next generation of scientists and engineers while advancing explainable artificial intelligence for biomedical applications. She is also adding up the accolades in response to her work. In February, the International Society for Computational Biology recognized Lee with its 2024 ISCB Innovator Award, given to a mid-career scientist who has consistently made outstanding contributions to the field of computational biology and continues to forge new directions; last month, the American Institute for Medical and Biological Engineering inducted her into the AIMBE College of Fellows — putting her among the top 2% of medical and biological engineers. Yesterday, the Ho-Am Foundation announced Lee as the 2024 Samsung Ho-Am Prize Laureate in Engineering for her pioneering contributions to the field of explainable AI.
As the saying goes, good things come in threes.
“This is an incredible honor for me, and I’m deeply grateful for the recognition,” said Lee, who holds the Paul G. Allen Professorship in Computer Science & Engineering and directs the AI for bioMedical Sciences (AIMS) Lab at the University of Washington. “There are so many deserving researchers, I am humbled to have been chosen. One of the most rewarding aspects of my role as a faculty member and scientist is serving as an inspiration for young people. As AI continues to transform science and society, I hope this inspires others to tackle important challenges to improve health for everyone.”
The Ho-Am Prize, which is often referred to as the “Korean Nobel Prize,” honors people of Korean heritage who have made significant contributions in academics, the arts and community service or to the welfare of humanity through their professional achievements. Previous laureates include Fields Medal-winning mathematician June Huh and Academy Award-winning director Joon-ho Bong of “Parasite” fame. In addition to breaking new ground through her work, Lee has broken the glass ceiling: She is the first woman to receive the engineering prize in the award’s 34-year history and, still in her 40s, one of the youngest recipients in that category — a testament to the outsized impact she has made so early in her career.
From 1-2-3 to A-B-C
As a child, she may have learned to love her 1-2-3s; as a researcher, Lee became more concerned with her A-B-Cs: AI, biology and clinical medicine.
“The future of medicine hinges on the convergence of these disciplines,” she said. “As electronic health records become more prevalent, so too will omic data, where AI will play a pivotal role.”
Before her arrival at Stanford to pursue her Ph.D., the “C” could have stood for “cognition,” the subject of her first foray into researching AI models like deep neural networks. For her undergraduate thesis, she developed a DNN for handwritten digit recognition that won the 2000 Samsung Humantech Paper Award. In the Stanford AI Lab, Lee shifted away from cognition and toward computational molecular biology, enticed by the prospect of identifying cures for diseases such as Alzheimer’s. She continued to be captivated by such questions after joining the Allen School faculty in 2010.
Six years after her arrival, Lee’s research took an unexpected — but welcome — turn when Gabriel Erion Barner knocked on her office door. Erion Barner was a student in the UW’s Medical Scientist Training Program, or MSTP, and he had a proposition.
“MSTP students combine a medical degree with a Ph.D. in a complementary field, and they are amazing,” said Lee. “Gabe was excited about AI’s potential in medicine, so he decided he wanted to do a Ph.D. in Computer Science & Engineering working with me. There was just one problem: our Ph.D. program didn’t have a process to accommodate students like Gabe. So, we created one.”
Erion Barner formally enrolled the following year and became the first MSTP student to graduate with an Allen School degree, in the spring of 2021. But he wouldn’t be the last. Joseph Janizek (Ph.D., ‘22) and current M.D. student Alex DeGrave subsequently sought out Lee as an advisor. Erion Barner has since moved on to Harvard Medical School to complete his medical residency, while Janizek is about to do the same at Lee’s alma mater, Stanford.
Meanwhile, Lee’s own move into the clinical medicine aspect of the A-B-Cs was complete.
Getting into SHAP
Beyond the A-B-Cs, Lee subscribes to a philosophy she likens to latent factor theory. A term borrowed from machine learning, latent factor theory posits that there are underlying — and unobserved — factors that influence the observable ones. Lee applies this theory when selecting the research questions in which she will invest her time. It’s part of her quest to identify the underlying factor that will transcend multiple problems, domains and disciplines.
So, when researchers began applying AI to medicine, Lee was less interested in making the models’ predictions more accurate than in understanding why they made the predictions they did in the first place.
“I just didn’t want to do it,” she said of pursuing the accuracy angle. “Of course we need the models to be accurate, but why was a certain prediction made? I realized that addressing the black box of AI — those latent factors — would be helpful for clinical decision-making and for clinicians’ perceptions of whether they could trust the model or not.” This principle, she noted, extends beyond medical contexts to areas like finance.
Lee discovered that the questions she raised around transparency and interpretability were the same questions circulating in the medical community. “They don’t just want to be warned,” she said. “They want to know the reasons behind the warning.”
In 2018, Lee and her team began to shine a light on the models’ reasoning. Their first clinical paper, appearing on the cover of Nature Biomedical Engineering, described a novel framework for not only predicting but also providing real-time explanations for a patient’s risk of developing hypoxemia during surgery. The framework, called Prescience, relied on SHAP values — short for SHapley Additive exPlanations — which applied a game-theoretic approach to explain the weighted outputs of a model. The approach is broadly applicable across many domains, and the paper has garnered more than 1,300 citations. In follow-up work, Lee and her team unveiled CoAI, or Cost-Aware Artificial Intelligence, which applied Shapley values to prioritize which patient risk factors to evaluate in emergency or critical care scenarios given a budget of time, resources or both.
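The game-theoretic idea behind SHAP can be sketched with a brute-force Shapley-value computation on a toy model (illustrative only; the SHAP library uses far more efficient approximations, and the model and feature values below are invented): each feature’s attribution is its average marginal contribution across all coalitions of the other features.

```python
from itertools import combinations
from math import factorial

def shapley_values(v, n):
    """Exact Shapley values for an n-player game with payoff function v(S)."""
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# Hypothetical "model": a weighted sum of whichever features are present;
# absent features fall back to a baseline of zero.
x, w = [1.0, 2.0, 3.0], [3.0, 1.0, 2.0]
def model(S):
    return sum(w[i] * x[i] for i in S)

phi = shapley_values(model, 3)
print(phi)  # each feature's contribution, w[i] * x[i] for this linear model
print(abs(sum(phi) - model({0, 1, 2})) < 1e-9)  # attributions sum to the output
```

That additivity property — the attributions always sum to the model’s output — is what makes Shapley-based explanations attractive for clinical settings, where each risk factor’s contribution needs to be accounted for.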
AI under the microscope
Lee and her collaborators subsequently shifted gears toward developing fundamental AI principles and techniques that could transcend any single clinical or research question. Returning to her molecular biology roots, Lee was curious about how she could use explainable AI to solve common problems in analyzing single-cell datasets to understand the mechanisms and treatment of disease. But to do that, researchers would need to run experiments in which they disentangle variations in the target cells from those in the control dataset to identify which factors are relevant and which merely confound the results. To that end, Lee and her colleagues developed ContrastiveVI, a deep learning framework for applying a technique known as contrastive analysis to single-cell datasets. The team published their framework in Nature Methods.
“By addressing contrastive scientific questions, we can help solve many problems,” Lee explained. “Our methods enable us to handle these nuanced datasets effectively.”
Up to that point, the utility of contrastive analysis in relation to single-cell data was limited; for once, latent factors — in this case, the latent variables typically used to model all variations in the data — worked against the insights Lee sought. ContrastiveVI solves this problem by separating those latent variables, which are shared across both the target and control datasets, from the salient variables exclusive to the target cells. This enables comparisons of, for example, the differences in gene expression between diseased and healthy tissue, the body’s response to pathogens or drugs, or CRISPR-edited versus unedited genomes.
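The separation of shared from salient variation can be illustrated with contrastive PCA, a simpler linear relative of the technique (this is a sketch on synthetic data, not ContrastiveVI’s variational method): subtracting the control data’s covariance from the target data’s covariance cancels the shared axis of variation, leaving the direction unique to the target cells.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 20
# One axis of variation shared by both datasets, one unique to the target.
shared = rng.normal(size=d); shared /= np.linalg.norm(shared)
salient = rng.normal(size=d); salient -= (salient @ shared) * shared
salient /= np.linalg.norm(salient)

control = rng.normal(size=(500, 1)) * shared * 3 + rng.normal(size=(500, d)) * 0.1
target = (rng.normal(size=(500, 1)) * shared * 3
          + rng.normal(size=(500, 1)) * salient * 2
          + rng.normal(size=(500, d)) * 0.1)

diff = np.cov(target.T) - np.cov(control.T)   # shared variation cancels out
eigvals, eigvecs = np.linalg.eigh(diff)
top = eigvecs[:, -1]                          # dominant direction of the difference

print(abs(top @ salient))  # close to 1: the salient axis is recovered
```

In the single-cell setting, the same separation lets researchers isolate, say, a perturbation’s effect from the biological variation present in both treated and untreated cells.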
Lee and her colleagues also decided to put medical AI models themselves under the microscope, applying more scrutiny to their predictions by developing techniques to audit their performance in domains ranging from dermatology to radiology. As they discovered, even when a model’s predictions are accurate, they should be treated with a healthy dose of skepticism.
“I’m particularly drawn to this direction because it underscores the importance of understanding AI models’ reasoning processes before blindly using them — a principle that extends across disciplines, from single-cell foundation models, to drug discovery, to clinical trial identification,” she said.
As one example, Lee and her team revealed how models that analyze chest x-rays to predict whether a patient has COVID-19 tend to rely on so-called shortcut learning, which leads them to base their predictions on spurious factors rather than genuine medical pathology. The journal Nature highlighted their work the following year. More recently, Lee co-authored a paper in Nature Biomedical Engineering that leveraged generative AI to audit medical-image classifiers used for predicting melanoma, finding that they rely on a mix of clinically significant factors and spurious associations. The team’s method, which favored the use of counterfactual images over conventional saliency maps to make the image classifiers’ predictions medically understandable, could be extended to other domains such as radiology and ophthalmology. Lee recently published a forward-looking perspective on the clinical potential of counterfactual AI in The Lancet.
Going forward, Lee is “really excited” about tackling the black-box nature of medical-image classifiers by automatically annotating semantically meaningful concepts using image-text foundation models. In fact, she has a paper on this very topic that is slated to be published in Nature Medicine this spring.
“All research topics hold equal fascination for me, but if I had to choose one, our AI model auditing framework stands out,” Lee said. “This unique approach can be used to uncover flaws in the reasoning process of AI models, which could solve a lot of society’s concerns about AI. We humans have a tendency to fear the unknown, but our work has demonstrated that AI is knowable.”
Live long and prosper
From life-and-death decision-making in the ER, to a more nuanced approach to analyzing life and death: One of the topics Lee has investigated recently concerns applying AI techniques to massive amounts of clinical data to understand people’s biological ages. The ENABL Age framework — which stands for ExplaiNAble BioLogical Age — applied explainable AI techniques to all-cause and cause-specific mortality to predict individuals’ biological ages and identify the underlying risk factors that contributed to those predictions to potentially inform clinical decision-making. The paper was featured on the cover of Lancet Healthy Longevity last December.
Lee hopes to build on this work to uncover the drivers of aging as well as the underlying mechanisms of rejuvenation — a topic she looks forward to exploring with her peers at a workshop she is co-leading in May. She is also keen to continue applying AI insights to identify therapeutic targets for Alzheimer’s disease, which is one of the 10 deadliest diseases in the U.S., as well as other neurodegenerative conditions. Her prior work on this topic was published in Nature Communications in 2021 and Genome Biology in 2023.
Even with AI’s flaws — some of which she herself has uncovered — Lee believes that it will prove to be a net benefit to society.
“Like other technologies, AI carries risks,” Lee acknowledged. “But those risks can be mitigated through the use of complementary technologies such as explainable AI that allow us to interpret complex AI models to promote transparency, accountability, and ultimately, trust.”
If Lee’s father, who passed away in 2013, could see her now, he would no doubt be impressed with how those early math lessons added up.
With large language models dominating the discourse these days, artificial intelligence researchers find themselves increasingly in the limelight. But while LLMs continue to grow in size — and capture a growing share of the public’s imagination — their utility could be limited by their voracious appetite for compute resources and power.
This is where the systems researchers have an opportunity to shine. And it so happens that one of the brightest sparks working at the intersection of AI and systems can be found right here at the University of Washington.
Zihao Ye, a fourth-year Ph.D. student in the Allen School, builds serving systems for foundation models and sparse computation to improve the efficiency and enhance the programmability of emerging architectures such as graph networks and the aforementioned LLMs. To support his efforts, NVIDIA recently selected Ye as one of 10 recipients of the company’s highly competitive Graduate Research Fellowship. The honorees are described by NVIDIA Chief Scientist Bill Dally as “among the most talented graduate students in the world.”
Ye applies his talents to the development of techniques that enable machine learning systems with large and sparse tensors — and their large workloads — to run more efficiently in resource-constrained contexts such as smartphones and web browsers. To that end, he teamed up with professor Luis Ceze and alum Tianqi Chen (Ph.D., ’19), now a faculty member at Carnegie Mellon University and co-founder alongside Ceze of Allen School spinout OctoAI, in the Allen School’s interdisciplinary SAMPL group.
“Zihao is a deep thinker who is diligent about background research and extremely skilled in systems building. That is a powerful combination in a systems researcher,” said Ceze, who holds the Edward D. Lazowska Professorship in Computer Science & Engineering at the Allen School and also serves as CEO of OctoAI. “He also has a good eye for research problems and is a fantastic colleague and teammate.”
Ye’s eye for research problems led him to pursue what Ceze termed a “very elegant idea” for overcoming the so-called hardware lottery when programming neural networks to run on modern GPUs. One of the main obstacles is that neural networks, such as those used in graph analytics, are sparse tensor applications, whereas modern hardware is designed primarily for dense tensor operations. To solve the problem, Ye and his colleagues created SparseTIR, a composable programming abstraction that supports efficient sparse model optimization and compilation. SparseTIR decomposes a sparse matrix into multiple sub-matrices with homogeneous sparsity patterns to enable more hardware-friendly storage, while offloading the associated computation to different compute units within GPUs to optimize runtime performance. The team layered their approach onto Apache TVM, an open-source framework that supports the deployment of machine learning workloads on any hardware backend.
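The decomposition strategy can be sketched in a few lines (a deliberately simplified NumPy illustration, not SparseTIR’s actual compiler machinery): rows of a sparse matrix are bucketed by nonzero count so that each bucket has a regular, fixed-width layout amenable to dense-style computation.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((8, 8)) * (rng.random((8, 8)) < 0.3)   # random sparse matrix
x = rng.random(8)

# Bucket rows by their number of nonzeros (an ELL-style layout per bucket).
buckets = {}  # nnz-per-row -> list of (row id, column indices, values)
for r in range(A.shape[0]):
    cols = np.flatnonzero(A[r])
    buckets.setdefault(len(cols), []).append((r, cols, A[r, cols]))

# Sparse matrix-vector product, computed bucket by bucket: within a bucket
# every row has the same width, so the compute is dense and regular.
y = np.zeros(A.shape[0])
for width, rows in buckets.items():
    if width == 0:
        continue
    ids = np.array([r for r, _, _ in rows])
    idx = np.stack([c for _, c, _ in rows])   # shape (rows_in_bucket, width)
    val = np.stack([v for _, _, v in rows])
    y[ids] = (val * x[idx]).sum(axis=1)

assert np.allclose(y, A @ x)                  # matches the dense product
```

The regular per-bucket shape is what makes the workload friendly to GPUs, which are built for uniform, dense operations rather than ragged sparse ones.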
“The number of sparse deep learning workloads is rapidly growing, while at the same time, hardware backends are evolving toward accelerating dense operations,” Ye explained. “SparseTIR is flexible by design, enabling it to be applied to any sparse deep learning workload while leveraging new hardware and systems advances.”
In multiple instances, the team found that SparseTIR outperformed highly optimized sparse libraries on NVIDIA hardware. Ye and his colleagues earned a Distinguished Artifact Award at ASPLOS ’23, the preeminent conference for interdisciplinary systems research, for their work.
Based on their scale and the amount of computation required, LLMs are fast becoming one of the most significant hardware workloads — and a potentially significant stumbling block. One of the critical factors for efficient LLM serving is kernel performance on GPUs. To that end, Ye and his collaborators examined LLM-serving operators to identify performance bottlenecks and developed an open-source library, FlashInfer, for enhanced LLM serving using inference acceleration techniques.
Ye also contributed to Punica, a project led by Ceze and faculty colleague Arvind Krishnamurthy to enable inference of multiple LLMs fine-tuned through low-rank adaptation from a common underlying pretrained model on a single GPU. The team’s approach, which significantly reduces the amount of memory and computation required for such tasks, earned first runner-up in the 2023 Madrona Prize competition.
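The core idea behind serving many low-rank-adapted fine-tunes from one base model can be sketched as follows (a conceptual simplification with invented sizes, not Punica’s batched CUDA implementation): every request shares one pretrained weight matrix, and each fine-tuned variant differs only by a small low-rank delta.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 16, 2                          # hidden size and LoRA rank (arbitrary)
W = rng.normal(size=(d, d))           # shared pretrained weight
adapters = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
            for _ in range(3)]        # three fine-tuned variants

def serve(x, adapter_id):
    A, B = adapters[adapter_id]
    return x @ W + (x @ A) @ B        # shared base compute plus a cheap delta

batch = rng.normal(size=(4, d))       # four requests hitting different models
ids = [0, 2, 1, 0]
out = np.stack([serve(x, i) for x, i in zip(batch, ids)])

# Equivalent to materializing each full fine-tuned weight W + A @ B,
# but without storing a separate d x d matrix per model:
full = np.stack([x @ (W + A @ B)
                 for x, (A, B) in zip(batch, (adapters[i] for i in ids))])
assert np.allclose(out, full)
```

Because only the rank-r matrices differ per model, memory grows by O(d·r) per fine-tune rather than O(d²), which is what makes serving many fine-tuned variants on a single GPU feasible.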
“Zihao’s work is already having a direct impact,” Ceze noted. “He is a true ML systems star in the making.”
“I’m honored to receive the Graduate Research Fellowship from NVIDIA, which leads the way in research and development of machine learning acceleration. I’m particularly excited to learn from industry experts and to build good systems together for the greater good,” said Ye.
“I would like to thank Luis, who provided the best guidance and advice, and all my collaborators over the years,” he continued. “UW has a super-collaborative environment where I can team up with people who bring different knowledge and backgrounds, which has greatly expanded my horizons and inspired my research.”
Read more about the 2024 NVIDIA Graduate Research Fellowship recipients on the company’s blog.
Melanoma is one of the most commonly diagnosed cancers in the United States. On the bright side, the five-year survival rate for people with this type of skin cancer is nearly 100% with early detection and treatment. And the prognosis could be even brighter with the emergence of medical-image classifiers powered by artificial intelligence, which are already finding their way into dermatology offices and consumer self-screening apps.
Such tools are used to assess whether an image depicts melanoma or some other, benign skin condition. But researchers and dermatologists have been largely in the dark about the factors which determine the models’ predictions. In a recent paper published in the journal Nature Biomedical Engineering, a team of researchers at the University of Washington and Stanford University co-led by Allen School professor Su-In Lee shed new light on the subject. They developed a framework for auditing medical-image classifiers to understand how these models arrive at their predictions based on factors that dermatologists determine are clinically significant — and where they miss the mark.
“AI classifiers are becoming increasingly popular in research and clinical settings, but the opaque nature of these models means we don’t have a good understanding of which image features are influencing their predictions,” explained lead author and Allen School Ph.D. student Alex DeGrave, who works with Lee in the AI for bioMedical Sciences (AIMS) Lab and is pursuing his M.D./Ph.D. as part of the UW Medical Scientist Training Program.
“We combined recent advances in generative AI and human medical expertise to get a clearer picture of the reasoning process behind these models,” he continued, “which will help to prevent AI failures that could influence medical decision-making.”
DeGrave and his colleagues employed an enhanced version of a technique known as Explanation by Progressive Exaggeration. Using generative AI — the same technology behind popular image generators such as DALL-E and Midjourney — they produced thousands of pairs of counterfactual images, which are images that have been altered to induce an AI model to make a different prediction from that associated with the original image. In this case, the counterfactual pairs corresponded with reference images depicting skin lesions associated with melanoma or non-cancerous conditions that may appear similar to melanoma, such as benign moles or wart-like skin growths called seborrheic keratoses.
The team trained the generator alongside a medical-image classifier to produce counterfactuals that resembled the original image, but with realistic-looking departures in pigmentation, texture and other factors that would prompt the classifier to adjudge one of the pair benign and the other malignant. They then repeated this process for a total of five AI medical-image classifiers, including an early version of an academic classifier called ModelDerm — which was subsequently approved for use in Europe — and two consumer-facing smartphone apps, Scanoma and Smart Skin Cancer Detection.
In order to infer which features contribute to a classifier’s reasoning and how, the researchers turned to human dermatologists. The physicians were asked to review the image pairs and indicate which of the counterfactuals most suggested melanoma, and then to note the attributes that differed between the images in each pair. The team aggregated those insights and developed a conceptual model of each classifier’s reasoning process based on the tendency for an attribute to sway the model towards a prediction of benign or malignant as well as the frequency with which each attribute appeared among the counterfactuals as determined by the human reviewers.
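The aggregation step can be illustrated with a toy tally (hypothetical annotations and a simplified scoring scheme, not the study’s exact procedure): each annotation records an attribute that differed within a counterfactual pair and whether it swayed the prediction toward malignant or benign.

```python
from collections import defaultdict

# Invented annotations: (attribute, sway), where +1 pushes the classifier
# toward "malignant" and -1 toward "benign".
annotations = [
    ("darker pigmentation", +1), ("darker pigmentation", +1),
    ("darker pigmentation", +1), ("atypical pattern", +1),
    ("more colors", +1), ("surrounding pinkness", +1),
    ("hair present", -1),
]

stats = defaultdict(lambda: [0, 0])  # attribute -> [frequency, net sway]
for attr, direction in annotations:
    stats[attr][0] += 1
    stats[attr][1] += direction

# Frequent attributes with a consistent sway dominate the classifier's
# inferred reasoning; rare or inconsistent ones carry less weight.
for attr, (count, sway) in sorted(stats.items(), key=lambda kv: -kv[1][0]):
    print(f"{attr:22s} seen {count}x, mean sway {sway / count:+.2f}")
```

A tally like this makes it easy to spot when a clinically irrelevant attribute, such as surrounding pinkness, is nonetheless influencing the model.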
During its audit, the team determined that all five classifiers based their predictions, at least in part, on attributes that dermatologists and the medical literature have deemed medically significant. Such attributes include darker pigmentation, the presence of atypical pigmentation patterns, and a greater number of colors — each of which point to the likelihood a lesion is malignant.
In cases where the classifier failed to correctly predict the presence of melanoma, the results were mixed. In certain instances, such as when the level of pigmentation yielded an erroneous prediction of malignancy when the lesion was actually benign, the failures were deemed reasonable; a dermatologist would most likely have erred on the side of caution and biopsied the lesion to confirm. But in other cases, the audit revealed the classifiers were relying not so much on signal as on noise. For example, the pinkness of the skin surrounding the lesion or the presence of hair influenced the decision-making of one or more classifiers; typically, neither attribute would be regarded by dermatologists as medically relevant.
“Pinkness of the skin could be due to an image’s lighting or color balance,” explained DeGrave. “For one of the classifiers we audited, we found darker images and cooler color temperatures influenced the output. These are spurious associations that we would not want influencing a model’s decision-making in a clinical context.”
According to Lee, the team’s use of counterfactual images, combined with human annotators, revealed insights that other explainable AI techniques are likely to overlook.
“Saliency maps tend to be people’s go-to for applying explainable AI to image models because they are quite effective at identifying which regions of an image contributed most to a model’s prediction,” she noted. “For many use cases, this is sufficient. But dermatology is different, because we’re dealing with attributes that may overlap and that manifest through different textures and tones. Saliency maps are not suited to capturing these medically relevant factors.
“Our counterfactual framework can be applied in other specialized domains, such as radiology and ophthalmology, to make AI models’ predictions medically understandable,” Lee continued. “This understanding is essential to ensuring their accuracy and utility in real-world settings, where the stakes for both patients and physicians are high.”
Lee and DeGrave’s co-authors on the paper include Allen School alum and MSTP student Joseph Janizek (Ph.D., ‘22), Stanford University postdoc Zhuo Ran Cai, M.D., and Roxana Daneshjou, M.D., Ph.D., a faculty member in the Department of Biomedical Data Sciences and in Dermatology at Stanford University.