Many web users have become accustomed to navigating page content with the help of spatial and visual cues such as navigation bars, tabs, and icons. But for those who are blind or low-vision, the proliferation of visually rich graphical user interfaces (GUIs) makes it challenging to locate and consume information online — even with the help of a screen reader or braille display. Now, thanks to Allen School professor Jennifer Mankoff and colleagues at Carnegie Mellon University, help is literally at hand with new Spatial Region Interaction Techniques (SPRITEs) that leverage a standard piece of equipment — the keyboard — to access interactive elements onscreen.
SPRITEs is a set of tools that enables non-sighted users to access web content that is implicitly conveyed to sighted users but is integral to browsing and navigation for all users. Whereas most websites tend to organize content in accordance with Gestalt psychology principles — for example, grouping similar items close together or consistently placing items in familiar locations — most commercially available screen readers are set up to access only simple page elements such as headers, links, and lists. By combining a screen reader with SPRITEs, however, non-sighted users can quickly and easily access richer content contained in elements such as menus, tables, and maps.
As Mankoff explains, SPRITEs is designed to supplement, not supplant, screen readers to enhance the user experience and keep up with current trends in website design.
“We’re not trying to replace screen readers, or the things that they do really well,” Mankoff says in a UW News release. “This study demonstrates that we can use the keyboard to bring tangible, structured information back, and the benefits are enormous.”
Those benefits include a significant improvement in users’ ability to complete online tasks, thanks to the way SPRITEs maps the keyboard to various elements of a site. The researchers focused on the corners and edges of the keyboard — with the exception of the function keys, which are reserved for browser-level controls — to make it easy for users to find the keys they need. In keeping with their user-centric approach, the team assigned the scrolling function to the right-most column of keys, enabling the user to hold onto the edge of the keyboard and easily keep track of which key they pressed last. Once a user finds what they are looking for as the screen reader speaks each object, they interact with their target by double-pressing a key.
Certain categories of content — for example, grouped content such as menus and search results, or elements such as tables and maps — are assigned to the numerical row of keys, with those at either end reserved for scrolling. This functionality enables non-sighted users to engage with information that would otherwise be difficult, if not impossible, for them to access using existing accessibility tools alone.
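The mapping described above — edge keys for scrolling, interior number-row keys for grouped content, and a double press to activate a target — can be illustrated with a minimal sketch. This is not the SPRITEs implementation; the class, key names, and timing window below are hypothetical, chosen only to show how a spatial key layout and double-press detection might fit together.

```python
# Hypothetical sketch of a SPRITEs-style key mapping (illustrative only,
# not the actual SPRITEs code).

NUMBER_ROW = list("1234567890")
RIGHT_COLUMN = ["backspace", "\\", "enter", "shift_r"]  # right edge of keyboard

class SpatialKeyMapper:
    """Classify key presses by their spatial position on the keyboard."""

    def __init__(self, double_press_window=0.5):
        self.window = double_press_window      # seconds allowed between presses
        self._last = (None, 0.0)               # (last key, time of last press)

    def classify(self, key):
        if key in RIGHT_COLUMN:
            return "scroll"                    # right-most column scrolls the page
        if key in NUMBER_ROW[1:-1]:
            return "select-group"              # interior number keys pick grouped content
        if key in (NUMBER_ROW[0], NUMBER_ROW[-1]):
            return "scroll-groups"             # keys at either end scroll between groups
        return "explore"                       # other keys explore page regions

    def press(self, key, now):
        last_key, last_time = self._last
        self._last = (key, now)
        if key == last_key and now - last_time <= self.window:
            return "activate"                  # double press interacts with the target
        return self.classify(key)
```

For example, pressing “5” once would select a group, pressing it again quickly would activate the spoken target, and pressing a right-edge key such as Enter would scroll.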
The SPRITEs keyboard layout
The researchers evaluated SPRITEs in a user study involving 10 blind or low-vision individuals experienced with accessibility tools for the web. Participants were asked to complete a set of tasks using their preferred screen reader, and then asked to complete a similar set of tasks using SPRITEs. Use of the latter produced a three-fold improvement in task completion rates in five of eight tasks, including those related to navigation, menu interaction, and tables. There was also evidence that, even in this limited study, participants began to develop a mental model of the spatial or hierarchical structure of a page as it related to the keyboard.
With SPRITEs, the researchers have found a way to extend the advantages of Gestalt-driven web design — which sighted individuals tend to take for granted — to an entirely new population of users.
“Rather than having to browse linearly through all the options, our tool lets people learn the structure of the site and then go right there,” Mankoff notes.
Mankoff and her co-authors at CMU — Ph.D. students Rushil Khurana and Elliot Lockerman and recent bachelor’s alumnus Duncan McIsaac — are presenting their paper on SPRITEs at the CHI 2018 conference in Montreal, Canada next week. Mankoff plans to continue refining SPRITEs and building in robust functionality before making it available to the public as part of WebAnywhere, a web-based screen reader developed by Allen School Ph.D. alumnus Jeffrey Bigham, now a faculty member at CMU, and Allen School professor emeritus Richard Ladner.
Read the UW News release here, and visit the project web page here.