Hi! I am Violynne (Ruolin) Wang, a first-year Ph.D. student in the Human-Computer Interaction Research Group at the University of California, Los Angeles, advised by Prof. Xiang 'Anthony' Chen. Prior to joining UCLA, I worked at the Tsinghua HCI Lab.
I am interested in augmenting the abilities of both digital devices and humankind to provide seamless experiences and improve well-being. My career blueprint as an interdisciplinary engineer is to work at the intersection of sensing, AI, and neuroscience. By creating novel interaction techniques and deploying elegant solutions for real-world users, I pursue intellectual impact as well as practical value. [CV]
How humans interact with the world and with each other depends on what information can be processed and how it is processed. The fundamental level is perception, grounded in the senses (for humans) or sensing (for machines). The higher level is cognition, grounded in neuroscience (for humans) or AI (for machines). The next generation of HCI should witness the intertwined evolution of humans and machines. Machines will be augmented by extending their sensing abilities, interconnecting with the environment, and learning from biological mechanisms such as neural systems; humans will be augmented by embracing new ways of interacting with the world and by having machines help us better understand the human body and its physical and mental states.
Since June 2017, I have devoted most of my passion and energy to building assistive technologies for blind and visually impaired people. In a broader sense, I see the potential of augmentation beyond accessibility: by adopting a perspective of ability-based design, we can make these technologies accessible to a much wider range of users.
Novel Interaction + Capacitive Sensing
Interaction Proxy + Fabrication
Information Retrieval + Q&A System
Computer Vision + Multimodal Feedback
EarTouch: Facilitating Smartphone Use for Visually Impaired People in Public and Mobile Scenarios
Ruolin Wang, Chun Yu, Xing-Dong Yang, Weijie He, Yuanchun Shi (CHI 2019, Best Paper Honorable Mention 🏅)
Interacting with a smartphone using touch input and speech output is challenging for visually impaired people in public and mobile scenarios, where only one hand may be available for input (e.g., when the other is holding a cane) and privacy may not be guaranteed when speech output is played through the speakers. We propose EarTouch, a one-handed interaction technique that allows users to interact with a smartphone by touching the screen with their ear. Users hold the smartphone in a talking position and listen to speech output privately through the ear speaker. EarTouch also brings us a step closer to inclusive design for all users who may experience situational impairments.
The potential of machines to help humans lead lives of well-being, at both the professional and the personal level, remains largely unexplored.
Cognition Enhancement + AI
Epilepsy + ML
Wearable Filtration System + Mobile App
"The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." — Mark Weiser
Wireless Sensing + User Behavior Modeling
Fluidic Interfaces 🌊
Is our brain smart enough to understand itself? My curiosity about neuroscience has always been the driving force behind my research.
Blending of the senses