It’s natural to pull out your smartphone and check a text or email. But what if you could do these things without picking up the phone — or even looking at it?
Davide Bolchini, Ph.D., an associate professor in the IU School of Informatics and Computing at IUPUI, has received a three-year, $499,876 grant from the National Science Foundation’s Cyber-Human Systems program to explore these possibilities. Bolchini, who chairs the school’s Department of Human-Centered Computing and specializes in accessible human-computer interaction, is principal investigator for the project, “Manipulating Text in Screenless Environments.”
Wearable and sound-based technologies, operating without traditional screen displays, can increase accessibility for people who are blind or have impaired vision. These technologies may become viable alternatives for other users as well.
“The goal of this project is to rethink our understanding of interacting with text from a screen-centric paradigm to screenless, aural environments.”
Davide Bolchini, chair, Department of Human-Centered Computing
“We live in a screen-centric world that captures most of our daily attention. This often conceals from us the possibility of realizing what our experience with information technology would look like without visual displays,” Bolchini says.
Building on more than a decade of research in accessible computing, particularly on how people who are blind navigate interactive systems such as the web by ear, Bolchini sees this project as the next step in a continuum of research exploring the design space of entirely auditory human-text interaction.
“We take for granted that visual displays are the only medium for experiencing information-rich, interactive applications,” he says.
“But this paradigm is very limiting.
“Typing, for example, is an activity that relies upon on-screen keyboards that still display characters visually. This presents a barrier for people who are blind—as well as for all users in situations where it is inconvenient or unsafe to hold a phone at all times to communicate.”
An exciting aspect of this work is studying the implications of transferring text-centric technology, such as the keyboard, from a visual-spatial dimension to one that is entirely temporal. The trade-off is delicate: a spoken stream of characters unfolds in time, so the design must recover the quick scanning and random access that a spatial layout offers at a glance.
Bolchini’s research investigates how to free ourselves from screens and keypads, possibly using armbands or other wearable accessories. “A major unsolved challenge is how to manipulate text aurally and silently, in situations when voice input fails or breaks privacy boundaries,” he says. “This will allow us to unbind users from a visual display both for input and output, and experience auditory flows of information controllable by a suitable form of nimble, wearable input.”
“This research is expected to inform future technologies that can augment the ability of over 7 million blind people in the United States to perform text entry,” Bolchini says, “circumventing direct interaction with screen-based devices.”
Community partnership is an important part of human-centered computing research. Obtaining input from people for whom screen-based interfaces pose barriers is essential to this project.
By leveraging prior studies he’s led on aural interfaces, Bolchini’s research will continue to engage people who are blind and visually impaired—from the Indiana School for the Blind, Bosma Enterprises, and Easterseals Crossroads—as well as IU students from underrepresented groups. Participants will co-create and evaluate new options to interact with text while bypassing the screen.
“My hope is that the project will contribute to shifting the way people think about accessible typing in eyes-free scenarios.”
Davide Bolchini
Bolchini’s latest research project will explore technology that moves with the user and ways to interact with devices through sound.
“The conceptual advances will be designed to work with current and future wearable input devices (hand/finger gestures supported by armband or smart rings),” he says, “and will inform the foundation for a new class of auditory keyboards untethered to a visual display.”
The project will later include evaluation studies with sighted and blind participants, using prototypes of auditory keyboards to examine how people can aurally manipulate text.
These observations will prove invaluable in understanding how we may one day navigate a screenless environment to manipulate text, performing entirely aural typing, editing, and text browsing.
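To make the concept concrete, here is a minimal, self-contained sketch of one way a time-based auditory keyboard could work. It is an illustration based on this article’s description, not the project’s actual system: characters are announced one at a time, and a single wearable gesture, simulated here by pressing Enter in a Unix-style terminal, selects the character currently being spoken. The function names, alphabet order, and dwell time are all hypothetical.

```python
# Hypothetical sketch of a time-based "auditory keyboard" loop.
# Characters are announced one at a time; a wearable gesture
# (simulated here by pressing Enter) selects the character
# currently being spoken. No screen is involved: output is speech,
# input is a single gesture.

import sys
import time
import select

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
DWELL_SECONDS = 0.6  # assumed time each character is announced before the next


def speak(text: str) -> None:
    # Stand-in for a text-to-speech call (a real system would use a TTS engine).
    print(f"[speech] {text}", flush=True)


def gesture_detected(timeout: float) -> bool:
    # Stand-in for a wearable gesture event; here, Enter pressed on stdin
    # within the dwell window (works in a Unix-style terminal).
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if ready:
        sys.stdin.readline()
        return True
    return False


def aural_type_one_character() -> str:
    """Cycle through the alphabet aurally until the user gestures."""
    while True:
        for ch in ALPHABET:
            speak("space" if ch == " " else ch)
            if gesture_detected(DWELL_SECONDS):
                return ch


if __name__ == "__main__":
    speak("auditory keyboard demo: press Enter when you hear the letter you want")
    typed = ""
    for _ in range(3):  # type three characters, then read the result back
        typed += aural_type_one_character()
        speak(f"selected; text so far: {typed}")
```

In a real system, the print and Enter stand-ins would presumably be replaced by speech output and armband or smart-ring gesture events, and the character stream would likely be adapted (for example, by frequency or prediction) to reduce the time spent waiting for the desired letter.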
Acknowledgment
This material is based upon work supported by the National Science Foundation under Grant No. 1909845. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Media Contact
Joanne Lovrinic
jebehele@iu.edu
317-278-9208