Intelligent Multi-modal Proxy for Visually Impaired Smartphone Users
The closest path to a natural user interface may be an intelligent proxy that supports multi-modal interactions according to end-users' intentions. We take a first step toward improving
the user experience of visually impaired smartphone users. Based on interviews and participatory design activities, we explore the proper roles of graphical, haptic, and voice UIs in this context and
establish guidelines for designing multi-modal user interfaces. The intelligent proxy serves as a control center that bridges "The Gulf of Execution and Evaluation":
it automatically performs tasks once it understands the users' intentions.