The dataset consists of 1430 samples recorded for 34 subjects. Each sample is an eye movement recording registered while a person observed an image. All measurements were made with a head-mounted JazzNovo eye tracker registering eye positions at a frequency of 1 kHz. In every case the observed image was a photograph of a face. Every subject looked at several (20 to 50) different photographs. The participants' task was to observe each photograph and decide whether they knew the face by pressing a yes/no button. Participants were not limited in time, so the observation lengths differ; in our dataset they range from 891 ms to 22012 ms.
The dataset downloadable for the competition is available in CSV format (a text file with one line per sample). Every line is a list of comma-separated elements as follows:
sid,known,x1,y1,x2,y2, ... xn,yn

where:

sid - subject identifier (sXX)
known - decision of the subject concerning the observed image (true/false)
xi - the i-th value of the recorded horizontal eye gaze point
yi - the i-th value of the recorded vertical eye gaze point
A value of 0 corresponds to a point in the middle of the screen; positive values correspond to points on the right or lower side of the screen, and negative values to points on the left or upper side of the screen.
The number of values differs for every sample!
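The variable-length format described above can be read with a short parser; the sketch below assumes the CSV layout given here (sid, known, then alternating x/y values), and the sample line used in the example is purely illustrative:

```python
def parse_sample(line):
    """Parse one CSV line: sid, known, then alternating x/y gaze values."""
    fields = line.strip().split(",")
    sid = fields[0]                       # subject id, e.g. "s01" ("?" in test.csv)
    known = fields[1] == "true"           # the subject's yes/no decision
    coords = [float(v) for v in fields[2:]]
    xs = coords[0::2]                     # horizontal gaze points x1..xn
    ys = coords[1::2]                     # vertical gaze points y1..yn
    return sid, known, xs, ys

# Illustrative line (values are made up, not from the dataset):
sid, known, xs, ys = parse_sample("s01,true,-10.5,3.2,0.0,-1.7")
```

Because the number of coordinate pairs differs per sample, each parsed sample yields lists of different lengths rather than fixed-width rows.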
We make two datasets available. The training dataset consists of 837 labeled samples; the testing dataset consists of 593 unlabeled samples. The participants' task is to correctly identify the subject of each testing sample.
The distribution of samples across subjects is roughly even in both the training and testing datasets, but not every subject appearing in the training dataset also appears in the testing dataset.
The 'train.csv' file holds samples with subject ids; the 'test.csv' file holds samples with '?' in the place of sid.
See Competition formula for details.
Please register (using the Registration link) and you will be able to download the datasets from here.