“Nothing about us without us.”
That statement has become a rallying cry for people with disabilities to ensure they have a direct voice in shaping the policies and conditions that, in turn, shape their access to employment, education, and lately, technology. With the growing proliferation of human-centered applications powered by artificial intelligence, it has become clear that the question of who will benefit from these emerging technologies will be determined in no small part by who makes them.
Venkatesh Potluri, a Ph.D. student currently working with professor Jennifer Mankoff in the Allen School’s Make4all Group, is among the makers determined to advance a more inclusive approach. He is also legally blind, which gives him firsthand knowledge of the barriers people with disabilities face in their interactions with technology on both the front and back ends.
“Conversations about inclusion in AI typically highlight the role of fair hiring practices and equal access to education,” Potluri said. “For people with disabilities, having access to the actual tools of development is equally critical to making the field and the technology it produces more inclusive. Right now, when it comes to the development of human-centered machine learning systems, that access is extremely limited.”
This is particularly the case for blind or visually impaired (BVI) developers, a group historically underrepresented in computing to begin with. For Potluri and his peers, access to critical elements of modern programming, such as user interface and user experience design, is essentially non-existent. Human-centered ML systems depend heavily on these elements, however; without accessibility support for tools beyond basic programming, BVI developers are sidelined from the development process in this rapidly growing area. Beyond the damage to individual careers, the exclusion of developers who are BVI or have other disabilities from the teams building these technologies can have real-world repercussions across the field of AI.
“Human-centered machine learning holds a lot of promise for solving important societal problems ranging from health care to transportation. But those solutions won’t benefit everyone if diverse needs and perspectives aren’t taken into account,” Potluri explained. “And in some cases, embedded biases in these technologies can result in real harm. For example, a self-driving car that relies on a collision avoidance system trained solely on a ‘typical’ pedestrian profile may not recognize a wheelchair user crossing the street.”
Earlier this year, Potluri was named a 2022 Apple Scholar in AI/ML for his efforts to advance a new paradigm in UI design, one that uses AI to improve accessibility for BVI developers. His research into how blind or visually impaired computer users understand visual semantics such as the shape, spacing, and size of user interface elements shows that users want access to this information, but many current screen readers do not surface it. Current UI accessibility tools offer only basic descriptions of core functionality like menus and links, while disregarding visual semantics such as shape, size, spatial arrangement and overall consistency. With the support of his Ph.D. fellowship from Apple, Potluri plans to construct a dataset and machine learning models for automatically generating UI descriptions that incorporate these rich visual semantics. His goal is to address a fundamental challenge BVI users face in understanding visual layout and aesthetics.
The enhanced descriptions produced by his new models will lay the groundwork for improvements in text-based search for design templates as well as machine understanding of UIs. And that’s just phase one; Potluri intends to build upon that foundation by developing an accessible UI editor that will improve the design experience for BVI developers. The new tool would enable developers to search for, apply and iteratively assess design templates, and to obtain suggestions for repairing deviations from convention that could detract from the user experience.
Potluri’s efforts will empower BVI developers to make meaningful UI design decisions that determine form as well as function. Like previous advances in accessible technology, the benefits will most certainly extend beyond people with disabilities.
“Ultimately, the results of this work will improve the quality of AI-based assistance when designing UIs for BVI and sighted users alike,” Potluri noted.
Potluri is one of just 15 graduate students at universities around the world to be recognized by Apple in its latest cohort of Scholars. He and his fellow honorees were chosen based on their innovative research, leadership and collaboration, and commitment to advancing their respective fields. Previously, as a first-year Ph.D. student working with Mankoff and professor Jon Froehlich in the Allen School’s Makeability Lab, Potluri earned a Google Lime Scholarship in recognition of his leadership, academic excellence and passion for computer science and technology.
“Venkatesh is all too familiar with the limitations and biases encoded into today’s systems with regard to what BVI programmers are capable of, or even interested in, doing,” noted Mankoff. “His research is raising the bar for what’s possible, and will help to include BVI people in fields such as interface design and data science.
“Not only is Venkatesh’s work of the highest quality, but also, his work works. He is committed to developing not just ideas but practical solutions that will lift up people with disabilities by giving them equitable access to these growing segments of our field,” she continued. “I cannot emphasize enough how important this is, in a world that often still thinks of people with disabilities as the subjects of research rather than the originators.”
Learn more about the Apple Scholars in AI/ML Ph.D. Fellowship here.
Congratulations, Venkatesh!