Refining Electrooculography Depth Priors with Motion Parallax*
Session Number
1
Advisor(s)
Dr. Ashwin Mohan, IMSA SYNAPSE Lab
Location
A150
Discipline
Computer Science
Start Date
15-4-2026 10:15 AM
End Date
15-4-2026 11:00 AM
Abstract
Depth estimation for wearable systems is of significant interest in fields such as extended reality, robotics, and human-computer interaction due to its central role in enabling spatial understanding. However, traditional eye tracking methods are expensive, uncomfortable, or invasive, making them impractical for wearable applications. Advances in electrooculography (EOG) have made lightweight, non-invasive eye tracking increasingly practical, but current EOG depth estimation methods that rely solely on vergence cues are prone to noise and drift. Motion parallax is a depth cue that recovers relative depth from the ratio of retinal image velocity to pursuit eye velocity. We propose the novel addition of motion parallax to EOG depth estimation as a cue that updates depth priors derived from vergence. To facilitate integration, we use an electrode configuration consistent with established practices to compute both vergence and motion parallax cues, and we statistically fuse these measurements across time. The resulting model produces per-frame depth estimates with associated uncertainty. We evaluate the approach across controlled viewing distances and simulated motion parallax stimuli to measure its accuracy and robustness to EOG drift.
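The fusion step described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes the linearized motion/pursuit approximation for the parallax cue and a simple Kalman-style precision-weighted update for combining it with the vergence-derived prior. All function names and numbers are hypothetical.

```python
# Hedged sketch (assumptions, not the paper's method): a vergence-derived
# depth prior is updated by a motion-parallax depth measurement using
# precision-weighted Gaussian fusion.

def parallax_depth(retinal_vel, pursuit_vel, fixation_depth):
    """Approximate depth from the motion/pursuit ratio.

    Assumes a linearized relation in which depth relative to the fixation
    point scales with retinal image velocity over pursuit eye velocity
    (a reasonable approximation only for small ratios).
    """
    ratio = retinal_vel / pursuit_vel
    return fixation_depth * (1.0 + ratio)

def fuse(prior_mean, prior_var, meas_mean, meas_var):
    """Kalman-style precision-weighted fusion of two Gaussian depth estimates."""
    gain = prior_var / (prior_var + meas_var)   # weight given to the measurement
    mean = prior_mean + gain * (meas_mean - prior_mean)
    var = (1.0 - gain) * prior_var              # fused variance always shrinks
    return mean, var

# Example: a drifting vergence prior (2.0 m) is corrected by a parallax
# measurement of equal variance taken while fixating at roughly 1.0 m.
meas = parallax_depth(retinal_vel=0.5, pursuit_vel=5.0, fixation_depth=1.0)
depth, uncertainty = fuse(2.0, 0.5, meas, 0.5)
```

Repeating this update each frame yields the per-frame depth estimate with associated uncertainty that the abstract describes, with the shrinking fused variance serving as the uncertainty track.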