Progress in both human visual neuroscience and computer vision is limited by the availability of representative visual data. Yet currently available image and movie databases are not representative of typical first-person visual experience. This project will create the Visual Experience Database (VEDB), a database of over 240 hours of first-person video complete with eye- and head-tracking. We will record from people of diverse ages (5-70 years) across three geographically distinct sites as they engage in common, everyday activities such as shopping, eating, or walking.
Data from the VEDB will be made openly available following a 6-month embargo period. Metadata including GPS tags, demographic data for the person recording each video, and pixelwise content tags will be made available so that researchers can select the parts of the dataset that most interest them.
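As an illustration of how such metadata might be used to subset the database, the sketch below filters a hypothetical metadata table with pandas. The file name and column names (session_id, activity, age, site) are assumptions for illustration only, not the VEDB's actual metadata schema.

```python
import pandas as pd

# Hypothetical example: the file name and column names below
# ("session_id", "activity", "age", "site") are illustrative,
# not the VEDB's actual metadata schema.
meta = pd.read_csv("vedb_metadata.csv")

# Select sessions recorded while walking, by participants under 18.
walking_kids = meta[(meta["activity"] == "walking") & (meta["age"] < 18)]

print(walking_kids[["session_id", "site", "age"]])
```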
To facilitate use of the database, we are also writing open-access software for data recording and analysis. The software will be well-documented and designed with the goal of being highly accessible, even to those with limited training in big data. We actively seek inter- and cross-disciplinary use and adaptation of the database and software. For more information, click the “For Researchers” link below.
Information for researchers interested in accessing the VEDB or software, requesting early access to the data, or collaborating with VEDB researchers
Listing of publications and presentations by our team related to the VEDB.
Due to the ongoing COVID-19 pandemic, we are currently unable to recruit participants for our study. We are actively developing protocols that permit participants and researchers to interact safely, in a manner consistent with the advice of public health experts. Check back soon for updated information.
In the video below, taken shortly before the start of the COVID-19 pandemic, postdoctoral researcher Kamran Binaee guides a participant through a pilot run of data collection and demonstrates the mobile eye tracking employed in this project. The green dot represents the estimated position of the participant’s gaze (where they were looking). (Video has no sound.)