Space-Compacting Magnification Augmented with Natural Gestures and Keyboardless Text Entry for Low Vision Smartphone Interaction

Researcher(s)

Sponsoring Agency
National Institutes of Health

Summary

For people with low vision, the smartphone has become as inextricably tied to daily life as it is for the general population. It is their go-to assistive device, and they rely on the built-in screen magnifier - Zoom on iOS and Magnification on Android - to interact with it. But the usability of these screen magnifiers falls woefully short, adversely affecting productivity.

First, the magnifier indiscriminately magnifies the raw screen pixels, including whitespace, as a blanket operation, pushing important contextual information such as visual cues out of the user's viewport. This forces users to pan over the occluded portions and mentally reconstruct the context needed to interact with the content elements. Second, magnification gestures such as bimanual multi-tap and multi-finger touch gestures are more complex than the basic one-finger swipe. This complexity makes them cumbersome and tiring to use, and remembering the entire repertoire of gestures is difficult; since all of these gestures involve some combination of fingers, it is easy to confuse one with another. Third, the virtual keyboard, which already takes up significant screen real estate, is difficult to use for text entry and editing in magnified view. Either the entire screen area is magnified, including the virtual keyboard, or only the display area is. In the former case some of the keys are occluded from view, whereas in the latter the keys remain unmagnified; in either case, key presses are hard to perform. In sum, these usability issues create a vastly disproportionate gap in user experience and productivity between people with and without low vision.

This proposal seeks to develop a next-generation screen magnifier that will bridge this wide gap in user experience. It is rooted in three novel ideas. First, instead of indiscriminately magnifying the screen content as is done now, it will perform object-aware magnification by identifying the objects in the graphical interface and compacting the space between them, so that contextually related objects stay close together in the magnified view. Second, by leveraging untapped built-in sensors such as the accelerometer, geomagnetic field, and barometric pressure sensors, it will expand the default surface gestures to include surfaceless natural gestures for magnification operations that can be performed with one hand, freeing the other hand for other tasks. More importantly, these gestures will be easy to perform, learn, and recall. Third, it will incorporate a novel keyboardless, gesture-based text entry and editing technique to eliminate the difficulties that virtual keyboards pose for text entry in magnification mode. These three ideas will inform the development of CxZoom, a transformative, next-generation smartphone screen magnifier for low vision. CxZoom will make interaction with smartphones far more usable for people with low vision, eliminating barriers to their productivity and empowering them to utilize the power and connectivity of these devices to participate fully in the digital economy.
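To make the space-compaction idea concrete, the sketch below is a minimal, hypothetical illustration (not the proposal's actual algorithm), written in Kotlin for an Android setting. It assumes the bounding boxes of on-screen objects are available, for example from the accessibility tree via AccessibilityNodeInfo#getBoundsInScreen, and it shrinks the vertical whitespace between successive objects to a small cap before magnification so that contextually related objects remain visible together in the magnified viewport. The function name and the maxGapPx parameter are assumptions introduced only for illustration.

```kotlin
import android.graphics.Rect

/**
 * Minimal sketch of vertical space compaction: shrink the whitespace
 * between successive on-screen objects to at most `maxGapPx` pixels,
 * so contextually related objects stay close together once magnified.
 * The object list would typically come from the accessibility tree;
 * here it is just a list of screen-coordinate Rects.
 */
fun compactVertically(objects: List<Rect>, maxGapPx: Int = 8): List<Rect> {
    val sorted = objects.sortedBy { it.top }
    val compacted = mutableListOf<Rect>()
    var previousBottom: Int? = null   // bottom of previous object in original coordinates
    var totalShift = 0                // cumulative whitespace removed so far
    for (r in sorted) {
        if (previousBottom != null) {
            val gap = (r.top - previousBottom).coerceAtLeast(0)
            // Remove whatever whitespace exceeds the allowed gap.
            totalShift += (gap - maxGapPx).coerceAtLeast(0)
        }
        compacted.add(Rect(r.left, r.top - totalShift, r.right, r.bottom - totalShift))
        previousBottom = r.bottom
    }
    return compacted
}
```

A full implementation would also compact horizontal whitespace and handle overlapping or nested objects; this sketch only conveys the basic idea of keeping related content together in the magnified view.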
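Similarly, a one-handed surfaceless gesture can be prototyped with the standard Android sensor APIs. The sketch below, again a hedged illustration rather than the proposal's actual gesture set, maps a sustained device tilt read from the accelerometer to a change in magnification level. The class name TiltZoomDetector, the thresholds, and the tilt-to-zoom mapping are assumptions made only for this example.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

/**
 * Minimal sketch of a one-handed, surfaceless "tilt to zoom" gesture
 * driven by the built-in accelerometer: holding the phone more upright
 * zooms in, tilting it toward horizontal zooms out. The thresholds and
 * the gesture mapping are illustrative only.
 */
class TiltZoomDetector(
    context: Context,
    private val onZoomChange: (Float) -> Unit   // e.g., forwarded to the magnifier
) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
    private var zoom = 1.0f

    fun start() =
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME)

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // event.values[1] is acceleration along the device's y-axis (m/s^2);
        // the gravity component along y serves as a crude proxy for pitch.
        val pitch = event.values[1]
        when {
            pitch > 7f && zoom < 8f -> zoom += 0.05f   // held nearly upright: zoom in
            pitch < 3f && zoom > 1f -> zoom -= 0.05f   // tilted toward horizontal: zoom out
            else -> return
        }
        onZoomChange(zoom)
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) { /* not needed here */ }
}
```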

Term
 -