AUXISCOPE

Improving Awareness of Surroundings for People with Tunnel Vision

People with peripheral vision loss (tunnel vision) face challenges in their daily lives. We propose a robotic assistant that helps them locate objects and navigate to find them in their environment, based on object detection techniques and multimodal feedback.

 

  • What is tunnel vision, and why do we care about it?

 

Tunnel vision is the loss of peripheral vision with retention of central vision, resulting in a constricted, circular, tunnel-like field of vision. By age 65, one in three Americans has some form of vision-impairing eye condition and may notice their side (peripheral) vision gradually failing. Unfortunately, this loss of peripheral vision can greatly affect a person’s ability to live independently. The challenges faced by people with tunnel vision can be divided into two categories: 1) seeing, feeling, and hearing what lies outside their field of vision, e.g., finding objects in the environment; and 2) gaining enhanced perception inside their field of vision, e.g., reading books and accessing computing devices.

 

  • How does AuxiScope help? What is the role of Google Coral in it?

 

AuxiScope explores the user’s surroundings by rotating its head and supports voice interaction to communicate what it finds. At the same time, it uses its arms to knock on the user’s shoulder, conveying whether the target is to the left or the right, and uses its hands to generate vibrations, conveying how far the target is from the user’s central vision. With its on-board computing capability, Google Coral serves as the brain of AuxiScope: it runs machine learning models to detect faces and objects, then instructs the other body parts to knock, speak, and vibrate, helping users see, feel, and hear what lies outside their central field of vision.
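
To give a concrete sense of the detection-to-feedback loop, here is a minimal sketch in Python. It assumes a TensorFlow Lite SSD detection model compiled for the Edge TPU and the pycoral API; knock_shoulder() and vibrate_hand() are hypothetical placeholders for AuxiScope’s servo and vibration-motor drivers, not the actual implementation.

# Minimal sketch: map one detection on Google Coral to directional feedback.
# Assumptions: a TFLite SSD model compiled for the Edge TPU, the pycoral API,
# and placeholder actuator helpers standing in for the real hardware drivers.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter


def knock_shoulder(side: str) -> None:
    # Placeholder: drive the arm servo to knock on the left or right shoulder.
    print(f"knock {side} shoulder")


def vibrate_hand(intensity: float) -> None:
    # Placeholder: drive the hand vibration motor; 0.0 = at center, 1.0 = far off-center.
    print(f"vibrate at {intensity:.2f}")


def guide_to_target(interpreter, frame: Image.Image, target_id: int) -> None:
    """Detect the target object in one camera frame and convey its direction and offset."""
    _, scale = common.set_resized_input(
        interpreter, frame.size, lambda size: frame.resize(size, Image.LANCZOS))
    interpreter.invoke()

    for obj in detect.get_objects(interpreter, score_threshold=0.5, image_scale=scale):
        if obj.id != target_id:
            continue
        # Horizontal offset of the detection center from the frame center, in [-1, 1].
        center_x = (obj.bbox.xmin + obj.bbox.xmax) / 2
        offset = (center_x - frame.width / 2) / (frame.width / 2)
        knock_shoulder("left" if offset < 0 else "right")  # which way to turn
        vibrate_hand(min(abs(offset), 1.0))                 # how far from central vision
        return


if __name__ == "__main__":
    interpreter = make_interpreter("ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite")
    interpreter.allocate_tensors()
    # target_id: label id of the object to find, per the model's label map (assumption).
    guide_to_target(interpreter, Image.open("frame.jpg"), target_id=0)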

 

  • What future work could lead to more elegant and practical solutions?

 

Although we provide cardboard glasses to simulate tunnel vision, we are aware that sighted people cannot fully experience what people with visual impairments experience. We built this prototype from the provided materials. Moving forward, we need to actively engage our target users in the design process and compare different solutions, including AR headsets and handheld or wearable devices. With this project, we hope to raise awareness of tunnel vision, which can also be a kind of situational impairment that everyone, including you and me, may experience at some point in life, e.g., as we grow older.

With team members Hsuan-Wei Fan, Yuki Tang, and Jiahao Li at the UIST 2019 Student Innovation Competition.

AUXISCOPE: Related Projects

  • PneuFetch: Supporting Blind and Visually Impaired People to Fetch Nearby Objects via Light Haptic Cues (CHI 2020 Late-Breaking Works)

📧Contact: violynne@ucla.edu

Latest News

Sept 16, 2022: Three works submitted to CHI.

Aug 2, 2022: Became a PhD Candidate, finally.

July 29, 2022: Hope I can spend more time on the Neuromatch computational neuroscience course.

April 7, 2022: Two works submitted to UIST.

March 28, 2022: Started a course on Neural Signal Processing.

Sept 21, 2021: Finished my first industry internship at Microsoft EPIC Research Group. So grateful.

April 23, 2021: Attended 2021 CRA-WP Grad Cohort for Women.

March 19, 2021: Finished a course on Neural Networks and Deep Learning.

Feb 13, 2021: Attended weSTEM conference. Such an inspiring experience!

Dec 18, 2020: Finished a course on Computational Imaging.

Dec 12, 2020: Three works accepted by CHI.

Nov 22, 2020: My first time attending NAISys.

Sept 17, 2020: Three works submitted to CHI.

June 20, 2020: One work rejected by UIST.

May 6, 2020: One work submitted to UIST.

March 20, 2020: Finished a course on Bioelectronics.

Feb 7, 2020: One work accepted by CHI LBW.

Dec 13, 2019: Finished a course on Neuroengineering.

Dec 8, 2019: One work rejected by CHI.

Oct 25, 2019: One work accepted by IMWUT.

Oct 22, 2019: One work presented at UIST SIC.

Sept 20, 2019: One paper submitted to CHI.

Aug 15, 2019: One paper submitted to IMWUT.

July 30, 2019: My first time attending SIGGRAPH.

 

Collaboration

UCLA STAND Program

UCLA Disabilities and Computing Program

NLP Group @ Computer Science, UCLA

Laboratory for Clinical and Affective Psychophysiology @ Psychology, UCLA

ACE, Makeability, Make4all @ UW

Human-Computer Interaction Initiative @ HKUST

 Interaction Lab @ KAIST

 

Service

SIGCHI Accessibility Committee (2021 - )

UCLA ECE Faculty Recruitment Student Committee (2021)

Accessibility Co-Chair (UIST 2020, 2021)

UCLA ECEGAPS Prelim Reform Committee (2020)

Publicity Co-Chair (ISS 2020)

Associate Chair (CHI LBWs 2020, 2022)

Reviewer (CHI, UIST, CSCW, MobileHCI, IMWUT, IEEE RO-MAN, ISS)

Student Volunteer (UIST 2019, 2020, NAISys 2020)

Volunteer at Beijing Volunteer Service Foundation and the China Braille Library (2018)

 

Teaching

ECE 209AS Human-Computer Interaction, UCLA (2019 Fall, 2020 Fall, 2022 Winter)

 

Honors & Awards

Selected for a SIGCHI Student Travel Grant, 2020

Selected to CRA-WP Grad Cohort for Women, 2020

Graduated with Distinction & Outstanding Thesis Award, Tsinghua University, 2019

Best Paper Honorable Mention Award (Top 5%), CHI 2019

National Scholarship (Top 1%), Ministry of Education of the People’s Republic of China, 2018

Second Prize, Tsinghua University 35th Challenge Cup, 2018

Comprehensive Scholarship (Top 4%), Tsinghua University, 2017

First Prize, GIX Innovation Competition, 2016

Outstanding Thesis Award, Tianjin University, 2015

  

Invited Talks

"Inclusive Design: Accessibility Ignites Innovation" at TEDxTHU, 2018

 

Selected Press

TechCrunch: Alibaba made a smart screen to help blind people shop and it costs next to nothing

The Next Web: Alibaba’s inexpensive smart display tech makes shopping easier for the visually impaired 

Techengage: Alibaba's Smart Touch is everything for the visually impaired

Google’s AI hardware, in new hands