All ETDs from UAB

Advisory Committee Chair

Lei Liu

Advisory Committee Members

Eugene Bourquin

Patti Fuhr

Roderick Fullard

Michael Loop

Nancy Mascia

Degree Name by School

Doctor of Philosophy (PhD), School of Optometry


TRANSFERRING VIRTUAL REALITY TRAINING TO REAL WORLD SETTINGS IN INDIVIDUALS WITH LOW VISION AND DUAL-SENSORY IMPAIRMENTS

ELLEN LAMBERT BOWMAN

VISION SCIENCE GRADUATE PROGRAM

ABSTRACT

The goal of this study was to determine whether individuals with low vision could learn useful visual skills in virtual reality and transfer those skills to the real world. To achieve this goal, one set of Orientation & Mobility (O&M) skills was selected: using the near-lane parallel traffic surge to decide the safest time to cross a signal-controlled street. Twelve participants with vision too poor to use pedestrian signals were randomly assigned to learn these skills either in computer-generated virtual streets (VST group, n=8) or in real streets (RST group, n=4), taught by a certified O&M specialist. Before and after training, the safety of all participants' street-crossing decisions was evaluated in real streets by asking them to say "GO" at the moment they felt it was safest to cross. The timing of "GO" within the pedestrian signal cycle was recorded by two testers using stopwatches and converted to a safety score. The outcome measure was the difference between pre- and post-training safety scores. Before training, the VST group made totally unsafe, somewhat unsafe, and safe crossing decisions 65%, 18%, and 17% of the time, respectively. After learning the O&M skills in virtual streets, these participants made totally unsafe, somewhat unsafe, and safe crossing decisions 3%, 6%, and 91% of the time. A similar shift from very unsafe to very safe crossing decisions was observed in the RST group. A repeated-measures ANOVA with TRAINING (pre, post) as the within-subject variable and GROUP (VST, RST) as the between-subject variable showed a significant TRAINING effect (F(1,10) = 288.3, p < 0.0005), a nonsignificant GROUP effect (F(1,10) = 0.821, p = 0.386), and a nonsignificant TRAINING×GROUP interaction (F(1,10) = 2.80, p = 0.125).
This research demonstrated that individuals with severely impaired vision can learn important O&M skills in virtual streets and apply those skills to solve real-world problems. The training effect obtained in virtual streets was as strong as that obtained in real streets. These results provide the first evidence that virtual reality is a viable platform for training visual skills in patients with low vision.
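The mixed-design (split-plot) ANOVA described in the abstract has a simple closed form when the within-subject factor has only two levels (pre, post): the between-subjects effect is a one-way ANOVA on each subject's mean score, and the within-subject effects reduce to tests on the pre-to-post difference scores. The sketch below is illustrative only; the function name is invented for this sketch, the data in the example are hypothetical, and none of the numbers correspond to the study's actual safety scores.

```python
import numpy as np

def split_plot_anova(pre, post, group):
    """Two-group, two-level mixed (split-plot) ANOVA sketch.

    pre/post: one score per subject; group: 0/1 label per subject.
    Returns F statistics for GROUP, TRAINING, and the interaction,
    each on (1, N - 2) degrees of freedom.
    """
    pre, post, group = map(np.asarray, (pre, post, group))
    levels = np.unique(group)
    n = np.array([(group == g).sum() for g in levels])
    df_err = len(pre) - len(levels)

    # Between-subjects effect: one-way ANOVA on each subject's mean
    # score, with subjects-within-groups as the error term.
    m = (pre + post) / 2.0
    ss_group = sum(ni * (m[group == g].mean() - m.mean()) ** 2
                   for g, ni in zip(levels, n))
    ss_subj = sum(((m[group == g] - m[group == g].mean()) ** 2).sum()
                  for g in levels)
    f_group = ss_group / (ss_subj / df_err)

    # Within-subject effects: analyze pre-to-post difference scores.
    d = post - pre
    d_means = np.array([d[group == g].mean() for g in levels])
    ms_w = sum(((d[group == g] - d[group == g].mean()) ** 2).sum()
               for g in levels) / df_err  # pooled variance of d

    # TRAINING: is the (unweighted) mean change different from zero?
    mean_change = d_means.mean()
    f_training = mean_change ** 2 / (ms_w / 4.0 * (1.0 / n).sum())

    # TRAINING x GROUP: did the two groups change by different amounts?
    f_interaction = (d_means[0] - d_means[1]) ** 2 / (ms_w * (1.0 / n).sum())

    return {"GROUP": f_group, "TRAINING": f_training,
            "TRAINING x GROUP": f_interaction}
```

With unequal group sizes (as in the n=8/n=4 design above), the TRAINING effect here uses the unweighted mean of the two group means, matching the Type III sums of squares that standard statistics packages report for unbalanced mixed designs.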



