- Augmented Writing and Mental Health
- Cross-modality Information Retrieval and Accessibility
- Peripheral Vision and Head-Mounted Displays
I am always looking for motivated undergraduate and master's students who are ambitious about participating in ongoing research. Students interested in conducting UCLA learn-by-research or a lab project on one of the above topics can email me to discuss further details. I am also open to collaboration opportunities with other research labs and organizations.
PneuFetch: Supporting Blind and Visually Impaired People to Fetch Nearby Objects via Light Haptic Cues
CHI 2020 Late-Breaking Work [DOI]
We present PneuFetch, a wearable device based on light haptic cues that supports blind and visually impaired (BVI) people in fetching nearby objects in unfamiliar environments. In our design, we generate friendly, non-intrusive, and gentle presses and drags that deliver direction and distance cues on a BVI user's wrist and forearm. As a proof of concept, we discuss our PneuFetch wearable prototype, contrast it with past work, and describe a preliminary user study.
AuxiScope: Improving Awareness of Surroundings for People with Tunnel Vision
UIST 2019 Student Innovation Competition
Tunnel vision is the loss of peripheral vision with retention of central vision, resulting in a constricted, tunnel-like field of vision. By age 65, one in three Americans has some form of vision-impairing eye condition and may notice their side or peripheral vision gradually failing. Unfortunately, this loss of peripheral vision can greatly affect a person's ability to live independently. We propose a robotic assistant that helps them locate objects and navigate to them, based on object detection techniques and multimodal feedback.
Novel interactions for Smart Speakers
2018 - 2019, Designing Wizard-of-Oz experiments to explore novel multi-modal interactions and error-correction techniques for smart speakers.
Navigation for Visually Impaired People
2018 - 2019, Conducting pilot studies to explore multi-modal navigation technologies for indoor spaces (corridors, stairs, doors, etc.) and outdoor spaces (crossings, sidewalks, etc.).
EarTouch: Facilitating Smartphone Use for Visually Impaired People in Mobile and Public Scenarios
CHI 2019 [DOI]
Interacting with a smartphone using touch input and speech output is challenging for visually impaired people in public and mobile scenarios, where only one hand may be available for input (e.g., the other holding a cane) and privacy may not be guaranteed when playing speech output through the speakers. We propose EarTouch, a one-handed interaction technique that allows users to interact with a smartphone by touching the screen with the ear. Users hold the smartphone in a talking position and listen to speech output privately through the ear speaker. EarTouch also brings us a step closer to inclusive design for all users who may experience situational impairments.
Tsinghua University 35th, 36th Challenge Cup, Second Prize, 2017-2018
The closest path to a natural user interface may be building an intelligent proxy that supports multi-modal interactions according to end-users' intentions. We take our first step towards improving the user experience of visually impaired smartphone users. Based on interviews and participatory design activities, we explore the proper roles of graphic, haptic, and voice UIs in this context and establish guidelines for designing multi-modal user interfaces. The intelligent proxy serves as a control center that bridges the "Gulf of Execution and Evaluation", automatically performing tasks once it understands the users' intentions.
BrainQuake: An Auxiliary Diagnosis System for Epilepsy
Tsinghua Medical School, 2018
An intelligent cloud-based sEEG processing system designed to provide a more effective means of planning epilepsy surgery and researching the pathogenesis of epilepsy. Proper visualization of the signal data helps neurosurgeons better localize the lesion, and algorithms have the potential to uncover seizure patterns in the raw data that may be difficult for humans to find.
SENTHESIA: Adding Synesthesia Experience to Photos
2018 GIX Innovation Competition Semi-Finalist
An integrated multi-modal solution for adding a synesthesia experience to ordinary food photos. Using natural language processing and visual recognition, SENTHESIA learns from comments and photos on food-review websites and other online materials to extract apt, nuanced descriptions from the semantic space and analyze the key elements of taste and sensory experience. Visually impaired users can also benefit from vivid descriptions beyond photos. I proposed and designed the concept, and am now cooperating with an interdisciplinary team to implement a prototype.
Facilitating Temporal Synchronous Target Selection through User Behavior Modeling
IMWUT 2019 [DOI]
Temporal synchronous target selection is an association-free selection technique: users select a target by generating signals (e.g., finger taps and hand claps) in sync with its unique temporal pattern. However, the classical pattern set designs and input recognition algorithms of such techniques did not leverage users' behavioral information, which limits their robustness to imprecise input. In this paper, we improve these two key components by modeling users' interaction behavior. We also tested a novel Bayesian recognizer, which achieved higher selection accuracy than the correlation-based recognizer when the input sequence is short. Informal evaluation results show that the selection technique scales effectively to different modalities and sensing techniques.
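To illustrate the baseline recognizer this work compares against, here is a minimal sketch of correlation-based synchronous selection: each candidate target is scored by correlating the user's binary input sequence against that target's temporal pattern, and the best-correlated target wins. The function names and the zero-padding of short inputs are illustrative assumptions, not the paper's implementation.

```python
def correlation_score(inputs, pattern):
    """Pearson-style correlation between a binary input sequence and a
    target's temporal pattern, sampled at the same rate (illustrative)."""
    n = len(pattern)
    # Pad a short input with zeros so both sequences align (assumption).
    xs = inputs[:n] + [0] * (n - len(inputs))
    mx = sum(xs) / n
    my = sum(pattern) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, pattern))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in pattern) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_target(inputs, patterns):
    """Return the index of the candidate pattern best matching the input."""
    return max(range(len(patterns)),
               key=lambda i: correlation_score(inputs, patterns[i]))
```

A Bayesian recognizer would instead place a distribution over each target given behavioral models of tap timing, which is why it can outperform raw correlation on short sequences.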
Tap-to-Pair: Associating Wireless Devices using Synchronous Tapping
IMWUT 2018 [DOI]
Currently, most wireless devices are associated by selecting the advertiser's name from a list, which becomes inefficient as the list grows and is prone to mis-selection. We propose Tap-to-Pair, a spontaneous device association mechanism that initiates pairing from advertising devices without hardware or firmware modifications. Users associate two devices by synchronizing taps on the advertising device with the blinking pattern displayed by the scanning device. We believe that Tap-to-Pair can unlock more possibilities for impromptu interactions in smart spaces.
Prevent Unintentional Touches on Smartphone
When interacting with smartphones, the holding hand may cause unintentional touches on the screen and disturb the interaction, which is annoying for users. We developed a capacitive-image processing algorithm that identifies the patterns of unintentional touches, such as how a touch "grows" over its first frames and the spatial relationships among concurrent touches on the screen. In mechanical and simulated tests, our algorithm rejected 96.33% of unintentional touches while rejecting only 1.32% of intentional touches.
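As a rough sketch of the kind of heuristic such an algorithm combines (the thresholds and feature names here are hypothetical, not the production values): a touch that lands near the screen edge and whose contact area keeps growing across its first frames is likely from the holding hand.

```python
def is_unintentional(touch, screen_w, edge_px=40, growth_ratio=1.5):
    """Heuristic sketch: flag touches that start near the left/right edge
    and whose capacitive contact area grows substantially over the first
    frames. `touch` holds an x position (px) and per-frame contact areas;
    all thresholds are illustrative assumptions."""
    near_edge = touch["x"] < edge_px or touch["x"] > screen_w - edge_px
    areas = touch["areas"]
    growing = len(areas) >= 2 and areas[-1] >= growth_ratio * areas[0]
    return near_edge and growing
```

A real implementation operates on full capacitive images and jointly considers all concurrent touches, but the edge-plus-growth intuition is the core of the pattern described above.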
AirEx: Portable Air Purification Machine
A self-contained breathing apparatus that offers an effective way to breathe clean air without wearing a mask. As Design Lead, I cooperated with teammates from the University of Washington and Tsinghua University to create a mobile air-filtration system for air pollution that provides adjustable assisted inhalation.
Infusion+: Intelligent Infusion System
Intravenous infusion is an important part of nursing work and one of the most commonly used treatments in clinical care. For a long time, most hospitals and medical institutions have relied on manual operation. Our solution comprises infusion controllers and a management system. Based on an infrared sensor and a stepper motor, each infusion controller can detect and regulate the infusion speed. The management system exchanges infusion information (patient, drug, infusion speed, time, etc.) with the controllers wirelessly.
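A hedged sketch of the controller's core logic (the drip factor, deadband, and function names are illustrative assumptions): the infrared sensor timestamps each falling drop, the drop interval yields a flow rate, and the stepper-driven clamp is nudged toward the prescribed rate.

```python
DROPS_PER_ML = 20  # a common macro-drip factor; illustrative assumption

def flow_rate_ml_per_h(drop_timestamps_s):
    """Estimate flow rate (mL/h) from recent IR drop timestamps (seconds)."""
    if len(drop_timestamps_s) < 2:
        return 0.0
    span = drop_timestamps_s[-1] - drop_timestamps_s[0]
    drops_per_s = (len(drop_timestamps_s) - 1) / span
    return drops_per_s * 3600 / DROPS_PER_ML

def clamp_adjustment(measured, target, deadband=2.0, step=1):
    """Stepper steps for the tube clamp: open (+) if flow is too slow,
    close (-) if too fast, hold within the deadband to avoid hunting."""
    if measured < target - deadband:
        return step
    if measured > target + deadband:
        return -step
    return 0
```

Running this loop on each new drop event keeps the measured rate near the prescription while the management system only needs the resulting telemetry.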
TouchU: An Any Door to Memory
An integrated multi-modal solution for enhancing the immersive experience of photography and reflection.
Tianjin University Email Registration Platform
Responsible for web interface design
Garden+: A Smart Watering System for Flowers
Using an Arduino and a humidity sensor to automatically control a water pump.
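The control loop amounts to simple threshold hysteresis. A Python sketch of that logic (the device itself runs on an Arduino; the thresholds here are illustrative):

```python
DRY_THRESHOLD = 30  # % soil humidity below which watering starts (assumed)
WET_THRESHOLD = 60  # % soil humidity above which watering stops (assumed)

def pump_state(humidity, pump_on):
    """Hysteresis control: switch the pump on when the soil is dry, off once
    it is wet, and otherwise hold the current state to avoid rapid toggling
    around a single threshold."""
    if humidity < DRY_THRESHOLD:
        return True
    if humidity > WET_THRESHOLD:
        return False
    return pump_on
```

Two thresholds rather than one keep the pump from chattering when the sensor reading hovers near the setpoint.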