Publications


ActivEye - Challenges in large scale eye-tracking for active participants Workshop

Postdoctoral trainee Dr. Binaee organized a workshop, ActivEye: Challenges in large scale eye-tracking for active participants, at the 2021 ACM Symposium on Eye Tracking Research and Applications (ETRA). Members of the VEDB research team presented four papers at the workshop.

Pupil Tracking Under Direct Sunlight

Binaee K, Sinnott C, Capurro K, MacNeilage P, Lescroart M. ETRA 2021 ActivEye Workshop, May 2021. https://doi.org/10.1145/3450341.3458490

Characterizing the performance of Deep Neural Networks for eye-tracking.

Biswas A, Binaee K, Capurro K, Lescroart M. ETRA 2021 ActivEye Workshop, May 2021. https://doi.org/10.1145/3450341.3458491

VEDBViz: The Visual Experience Database Visualization and Interaction Tool

Ramanujam S, Sinnott C, Shankar B, Halow S, Szekely B, Binaee K, MacNeilage P. ETRA 2021 ActivEye Workshop, May 2021. https://doi.org/10.1145/3450341.3458486

Ergonomic Design Development of the Visual Experience Database Headset

Shankar B, Sinnott C, Binaee K, Lescroart M, MacNeilage P. ETRA 2021 ActivEye Workshop, May 2021. https://doi.org/10.1145/3450341.3458487



Positional head-eye tracking outside the lab: an open-source solution.

Peter Hausamann, Christian Sinnott, and Paul R. MacNeilage. ETRA ’20: ACM Symposium on Eye Tracking Research and Applications, June 2020, Article No. 14, pp. 1–5. https://doi.org/10.1145/3379156.3391365

Abstract: Simultaneous head and eye tracking has traditionally been confined to a laboratory setting, and real-world motion tracking has been limited to measuring linear acceleration and angular velocity. Recently available mobile devices such as the Pupil Core eye tracker and the Intel RealSense T265 motion tracker promise to deliver accurate measurements outside the lab. Here, the researchers propose a hardware and software framework that combines both devices into a robust, usable, low-cost head and eye tracking system. The developed software is open source and the required hardware modifications can be 3D printed. The researchers demonstrate the system’s ability to measure head and eye movements in two tasks: an eyes-fixed head rotation task eliciting the vestibulo-ocular reflex inside the laboratory, and a natural locomotion task where a subject walks around a building outside of the laboratory. The resulting head and eye movements are discussed, as well as future implementations of this system.
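The paper's open-source framework handles the device integration itself. Purely as an illustration of the kind of offline stream alignment such a combined head and eye tracking system involves, the sketch below merges hypothetical gaze and head-pose recordings by nearest timestamp; the file names, column names, and tolerance are assumptions for illustration, not the paper's actual data format.

    # Illustrative sketch (not the paper's pipeline): align eye-tracker gaze samples
    # with T265 head-pose samples recorded against a shared clock by matching each
    # gaze sample to the nearest head-pose sample in time.
    import pandas as pd

    # Hypothetical recordings; file and column names are assumptions.
    gaze = pd.read_csv("gaze.csv")       # columns: timestamp, gaze_x, gaze_y
    head = pd.read_csv("head_pose.csv")  # columns: timestamp, yaw, pitch, roll

    gaze = gaze.sort_values("timestamp")
    head = head.sort_values("timestamp")

    # Nearest-timestamp join with a 10 ms tolerance; unmatched gaze samples get NaN pose.
    combined = pd.merge_asof(gaze, head, on="timestamp",
                             direction="nearest", tolerance=0.010)
    print(combined.head())

A real-time integration would instead subscribe to both device streams and handle clock synchronization at recording time; the sketch above only shows the offline alignment idea.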


Head motion predictability explains activity-dependent suppression of vestibular balance control.

H Dietrich, F Heidger, R Schniepp, PR MacNeilage, S Glasauer, M Wuehr. Sci Rep. 2020 Jan 20;10(1):668. https://doi.org/10.1038/s41598-019-57400-z PMID: 31959778

Abstract: Vestibular balance control is dynamically weighted during locomotion. This might result from a selective suppression of vestibular inputs in favor of a feed-forward balance regulation based on locomotor efference copies. The feasibility of such a feed-forward mechanism should, however, critically depend on head movement predictability (HMP) during locomotion. To test this, we studied in 10 healthy subjects the differential impact of a stochastic vestibular stimulation (SVS) on body sway (center of pressure, COP) during standing and walking at different speeds and compared it to activity-dependent changes in HMP. SVS-COP coupling was determined by correlation analysis in the frequency and time domains. HMP was quantified as the proportion of head motion variance that can be explained by the average head trajectory across the locomotor cycle. SVS-COP coupling decreased from standing to walking and further dropped with faster locomotion. Correspondingly, HMP increased with faster locomotion. Furthermore, SVS-COP coupling depended on the gait-cycle phase, with peaks corresponding to periods of least HMP. These findings support the assumption that during stereotyped human self-motion, locomotor efference copies selectively replace vestibular cues, similar to what was previously observed in animal models.
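As a rough numerical illustration of the head movement predictability (HMP) measure described above, i.e., the proportion of head-motion variance explained by the average trajectory across the locomotor cycle, the sketch below assumes head-motion data already segmented into gait cycles and resampled to a common length; it is not the authors' implementation.

    # Illustrative sketch of the HMP idea: fraction of head-motion variance explained
    # by the cycle-averaged head trajectory. Assumes cycles are already segmented and
    # resampled to equal length (an assumption; not the authors' code).
    import numpy as np

    def head_motion_predictability(cycles):
        """cycles: array of shape (n_cycles, n_samples), one head-motion signal
        (e.g., pitch velocity) per gait cycle."""
        mean_trajectory = cycles.mean(axis=0)        # average trajectory across cycles
        residual = cycles - mean_trajectory          # cycle-to-cycle deviations
        return 1.0 - residual.var() / cycles.var()   # explained-variance proportion

    # Toy example: a sinusoidal "head movement" plus noise over 20 gait cycles.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 100)
    cycles = np.sin(t) + 0.2 * rng.standard_normal((20, 100))
    print(f"HMP = {head_motion_predictability(cycles):.2f}")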


Presentations


The Visual Experience Database: a large-scale point-of-view database for vision research

Kamran Binaee, Christian Sinnott, Paul MacNeilage, Mark Lescroart, Benjamin Balas, Michelle R. Greene
32nd Symposium: Active Vision, Center for Visual Science, University of Rochester, June 2–5, 2021.
Poster presentation by Kamran Binaee
Note: This conference was rescheduled from June 2020 due to the COVID-19 pandemic.

Abstract: Currently available image and movie datasets are not representative of the natural statistics of our visual experience. To address this issue, we are creating the Visual Experience Database, consisting of over 240 hours of first-person video with eye- and head-tracking. During data collection, participants of diverse ages (5–70 years) at three geographically distinct sites engage in common, everyday activities, which prevents over-representation of any particular region, environment, or task. We modified the Pupil Labs eye-tracking hardware with a head-tracking module and a FLIR camera for a wider field of view and higher resolution. We present our open-source software for recording the data streams from the different devices and share results from pilot testing during system development, data quality assessments, and tradeoffs, e.g., between image quality and file size. We also present the data collection protocol, including calibration and validation of head and eye movement measurements.
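The abstract mentions calibration and validation of the head and eye measurements. As a minimal sketch of how validation accuracy is commonly quantified (mean angular error between recorded gaze directions and known target directions, in degrees), the code below uses made-up unit-vector data; it is not the VEDB pipeline.

    # Illustrative sketch (not the VEDB codebase): mean angular error, in degrees,
    # between recorded gaze directions and known validation-target directions.
    import numpy as np

    def mean_angular_error_deg(gaze_dirs, target_dirs):
        """Both arrays: shape (n_samples, 3), rows are unit direction vectors."""
        cos_err = np.clip(np.sum(gaze_dirs * target_dirs, axis=1), -1.0, 1.0)
        return float(np.degrees(np.arccos(cos_err)).mean())

    # Toy example: gaze offset by 1 degree from a straight-ahead target.
    target = np.tile([0.0, 0.0, 1.0], (5, 1))
    theta = np.radians(1.0)
    gaze = np.tile([np.sin(theta), 0.0, np.cos(theta)], (5, 1))
    print(f"mean angular error = {mean_angular_error_deg(gaze, target):.2f} deg")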

 
The creation of the VEDB is supported by an NSF EPSCoR Research Infrastructure Improvement Program: Track-2 Focused EPSCoR Collaborations grant (award #1920896).