Georeferencing eye tracking data on interactive cartographic products
ISBN 978-85-88783-11-9
Authors
1 Ooms, K.; 2 de Maeyer, P.
1 GHENT UNIVERSITY, Email: kristien.ooms@ugent.be
2 GHENT UNIVERSITY, Email: philippe.demaeyer@ugent.be
Abstract
Current state-of-the-art eye tracking systems offer only limited automated solutions for analysing interactive stimuli. Users' gaze locations (or Points of Regard, POR) are typically recorded in screen coordinates (e.g. pixel locations on a display) and not in geographic coordinates, which poses a spatial data analysis challenge when evaluating interactive cartographic products. Nevertheless, the viewed geographic locations may be particularly relevant for a specific spatial decision-making task. Interactive maps in user studies are often approximated by pre-computed animations or by automatically loading a number of subsequent static images. In doing so, the experimenter introduces a high level of experimental control to facilitate empirical data analysis with dynamic displays. To increase ecological validity, however, participants should be able to execute a task on interactive maps as they normally would, that is, without restricting their inference-making behaviour or the interactivity level of the tested map display. Other solutions, such as segmenting screen recordings based on user actions, Dynamic Areas of Interest and Semantic Gaze Mapping, typically demand a large amount of time-consuming manual work, which decreases their attractiveness in interactive cartographic user studies.
To evaluate interactive cartographic products, it is essential that human-map interactions are tracked as well. In User Centred Design (UCD), user-system interaction logging (e.g. mouse movements, keystroke analyses) is often used to gather quantitative data from users who execute a task with a product, and such logging has also been linked with eye tracking on interactive applications. For geographic analyses, this collected data should ideally be represented by means of map or geographic coordinates. By combining the gathered data (the initial settings of the interactive map, the eye movements and the user actions, all in pixel coordinates), every recorded eye movement can be recalculated and thus placed on its corresponding geographic location.
Using this methodology, a user study was conducted in which participants had to execute four tasks in Google Maps, using only the panning operation. Two of these tasks consisted of following a route in Belgium at a detailed scale level (level 13); in the two other tasks the participants had to locate Belgium on a less detailed map (scale level 7). Furthermore, the tasks alternated between the map view and the satellite view. From this study, one can derive how participants use the panning operation and how this operation influences their cognitive processes (fixation durations). By recalculating the recorded eye movement data to their corresponding geographic positions, the location of the fixations could be evaluated both on the screen and on the map. Moreover, the recalculated eye movements were imported into a GIS for spatial analyses such as buffering and calculating overlaps between participants. In this phase of the research, the zooming operation has not yet been included; only in this way can the actual influence of the panning operation on the map user be isolated. In a next phase, the zooming operation will be investigated, and finally both operations will be integrated in a concluding experiment.
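As an illustration of the recalculation step, the sketch below converts a fixation recorded in screen pixels to geographic (WGS84) coordinates, assuming a Web Mercator tile pyramid with 256-pixel tiles as used by Google Maps. The function names, the sign convention of the pan offsets and the example values are hypothetical and are not taken from the study itself; the actual experiment logged the initial map settings and the users' panning actions, from which an equivalent cumulative pixel offset can be derived.

```python
import math

TILE_SIZE = 256  # Web Mercator tile size in pixels (Google Maps convention)

def lonlat_to_world_px(lon, lat, zoom):
    """Project WGS84 lon/lat to 'world' pixel coordinates at a given zoom level."""
    scale = TILE_SIZE * 2 ** zoom
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

def world_px_to_lonlat(x, y, zoom):
    """Inverse projection: world pixel coordinates back to WGS84 lon/lat."""
    scale = TILE_SIZE * 2 ** zoom
    lon = x / scale * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * y / scale))))
    return lon, lat

def fixation_to_lonlat(fix_px, fix_py, centre_lon, centre_lat, zoom,
                       screen_w, screen_h, pan_dx=0.0, pan_dy=0.0):
    """Recalculate a fixation recorded in screen pixels to geographic coordinates.

    centre_lon/centre_lat: initial map centre when the stimulus was loaded.
    pan_dx/pan_dy: cumulative pan offset (in pixels) applied before this fixation;
    a positive offset is assumed to shift the map centre east/south respectively.
    """
    cx, cy = lonlat_to_world_px(centre_lon, centre_lat, zoom)
    # Current map centre after the user's panning actions
    cx += pan_dx
    cy += pan_dy
    # Offset of the fixation from the screen centre, added to the map centre
    wx = cx + (fix_px - screen_w / 2.0)
    wy = cy + (fix_py - screen_h / 2.0)
    return world_px_to_lonlat(wx, wy, zoom)

# Example: a fixation at screen pixel (512, 300) on a 1280x1024 display, with the
# map initially centred on Brussels at level 13 and the centre shifted 200 px east.
lon, lat = fixation_to_lonlat(512, 300, 4.35, 50.85, 13, 1280, 1024, pan_dx=200)
print(round(lon, 4), round(lat, 4))
```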
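Once the fixations carry geographic coordinates, they can be loaded into a GIS for the spatial analyses mentioned above. Below is a minimal sketch, assuming the GeoPandas library and the Belgian Lambert 72 projection (EPSG:31370) so that buffer distances are metric; the 100 m buffer radius, the participant labels and the coordinates are illustrative assumptions, not values from the study.

```python
import geopandas as gpd

# Hypothetical georeferenced fixations (participant id, longitude, latitude in WGS84)
fixations = gpd.GeoDataFrame(
    {"participant": ["p1", "p1", "p2", "p2"]},
    geometry=gpd.points_from_xy([4.35, 4.40, 4.36, 4.41], [50.85, 50.86, 50.84, 50.87]),
    crs="EPSG:4326",
)

# Re-project to Belgian Lambert 72 so buffer distances are expressed in metres
fixations_m = fixations.to_crs(epsg=31370)

# Buffer every fixation by 100 m and dissolve the buffers per participant
fixations_m["geometry"] = fixations_m.buffer(100)
viewed_areas = fixations_m.dissolve(by="participant")

# Overlap between the areas viewed by the two participants (in square metres)
overlap = viewed_areas.geometry.loc["p1"].intersection(viewed_areas.geometry.loc["p2"])
print(overlap.area)
```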
Keywords
Interactive maps; Eye tracking; User study