Hi! I am Ruolin Wang (Violynne), a Research Assistant and Ph.D. student in the Human-Computer Interaction Research Group at the University of California, Los Angeles, advised by Prof. Xiang 'Anthony' Chen. Prior to joining UCLA, I worked in the Tsinghua Pervasive HCI Group. I received my M.Sc. degree in Computer Science from Tsinghua University and my B.Eng. degree in Microelectronics from Tianjin University.
The next generation of HCI will witness the intertwined evolution of humans and machines. Machines will be augmented by extended sensing abilities, interconnection with the environment, and learning from biological mechanisms such as neural systems; humans will be augmented by embracing new ways of interacting with the world and by having machines help them better understand their bodies and their physical and mental states. My career blueprint as an interdisciplinary researcher is to work at the intersection of Sensing, AI, and Neuroscience, augmenting the abilities of both digital devices and humankind to provide seamless experiences and improve wellbeing. By creating novel interaction techniques and deploying elegant solutions for real-world users, I pursue intellectual impact as well as practical value. [CV]
Since June 2017, I have devoted most of my passion and energy to building assistive technologies for blind and visually impaired (BVI) people. By adopting an ability-based design perspective, we can make these technologies accessible to a much wider range of users, and in a broader sense I see the potential of augmentation beyond accessibility. Specifically, my work focuses on:
Novel Interaction + Capacitive Sensing
Interaction Proxy + Fabrication
Information Retrieval + Q&A System
Computer Vision + Multi-modal Feedback
Ruolin Wang, Chun Yu, Xing-Dong Yang, Weijie He, Yuanchun Shi (CHI 2019, Best Paper Honorable Mention 🏅)
Interacting with a smartphone using touch input and speech output is challenging for visually impaired people in public and mobile scenarios, where only one hand may be available for input (e.g., the other holding a cane) and privacy may not be guaranteed when speech output is played through the speakers. EarTouch is a one-handed interaction technique that allows users to interact with a smartphone by touching the ear on the screen. Users hold the smartphone in a talking position and listen privately to speech output from the ear speaker. Eight ear gestures support seven common tasks, including answering a phone call, sending a message, and map navigation. EarTouch also brings us a step closer to inclusive design for all users who may experience situational impairments.
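To make the interaction model concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of how classified ear gestures might be dispatched to smartphone tasks. The gesture labels, task mapping, and `dispatch` helper are illustrative assumptions; in the real system, gestures are recognized from capacitive touchscreen data of the ear.

```python
# Hypothetical sketch of an EarTouch-style gesture dispatcher: an upstream
# classifier labels capacitive touch frames as ear gestures, and each
# gesture is routed to a smartphone task. Names here are assumptions.

from typing import Callable, Dict

# Eight illustrative ear-gesture labels mapped to common tasks; the paper
# defines eight gestures, but this particular mapping is made up.
GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {
    "single_tap":  lambda: print("Answer the incoming call"),
    "double_tap":  lambda: print("Hang up / confirm"),
    "swipe_up":    lambda: print("Previous item"),
    "swipe_down":  lambda: print("Next item"),
    "swipe_left":  lambda: print("Go back"),
    "swipe_right": lambda: print("Open messages"),
    "press_hold":  lambda: print("Start map navigation"),
    "circle":      lambda: print("Repeat speech output"),
}

def dispatch(gesture_label: str) -> None:
    """Route a classified ear gesture to its task; ignore unknown labels."""
    action = GESTURE_ACTIONS.get(gesture_label)
    if action is not None:
        action()

if __name__ == "__main__":
    # Simulated output of an upstream ear-gesture classifier.
    for label in ["single_tap", "swipe_down", "press_hold"]:
        dispatch(label)
```

The table-driven mapping keeps the gesture vocabulary small and discoverable, which matters for eyes-free use: each gesture must be memorable and distinguishable by the ear alone.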
The potential of machines and AI to support human wellbeing is still largely unexplored.
Mental Health + AI
Epilepsy + ML
Wearable Filtration System + Mobile App
Infrared Sensing + Database Management
"The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."
Wireless Sensing for Device Association
Multimodal User Behavior Modeling
Fluidic Interfaces 🌊
Is our brain smart enough to understand the brain? My curiosity about the brain has always been the driving force behind my research. I am interested in: