What are you looking at? Effects of a Movable Tablet on Gaze Direction Detection

Keywords
Gaze detection, videoconferencing, head movements, embodiment
Issue Date
2012-08-23
Language
en
Abstract
In 2007, Schrammel et al. investigated how well people could estimate the gaze location of an embodied agent. They found that participants misinterpreted the agent's gaze location with errors of up to thirty-five centimeters. The agent they used consisted of an avatar displayed on a fixed monitor. Based on previous literature, we suspected that not being able to physically turn towards a point of interest might be one of the causes of these errors. Bailenson et al. (2002) showed that participants derived information about a person's gaze behavior from head movements. In our research, we examined whether physical, head-like movements of a tablet could improve the detection of that agent's gaze direction. We conducted an experiment similar to that of Schrammel et al. (2007), using a table on which participants had to locate the gaze target of an agent positioned in front of them. We compared three scenarios, labelled static neck, dynamic neck, and person. In the static neck scenario, participants had to locate the gaze direction of a face shown on a fixed tablet screen, as in the experiment of Schrammel et al. (2007). In the dynamic neck scenario, the face was shown on a tablet screen that was physically turned towards the gaze target by a robotic arm. Finally, in the person scenario, a person was seated in front of the participants so that they were face to face. Nine participants each completed eight trials in every scenario. When looking at the distance between the targeted and the estimated gaze location, we found significant differences between the scenarios. The error in distance was significantly smaller in the person scenario than in the static and dynamic neck scenarios, showing that participants could estimate another person's gaze location better when that person was sitting in front of them rather than being displayed on a screen. We could not find a significant difference between the two tablet conditions, though this might be due to the small number of participants. Although the effect was not significant, participants estimated the gaze location more consistently in the dynamic neck scenario than in the static neck scenario, which suggests that physical movements of a tablet improve its ability to convey a gaze location. Making a tablet movable therefore seems to have potential for improving the accuracy with which a gaze direction can be detected, though follow-up research is required to provide more conclusive results.
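To illustrate the error measure described above, the following is a minimal sketch assuming the analysis reduces to Euclidean distances (in centimeters) between targeted and estimated gaze points on the table surface; the scenario names match the abstract, but the coordinate data and variable names are hypothetical and not taken from the study.

import statistics

def error_distance(target, estimate):
    # Euclidean distance (in cm) between the targeted and the estimated gaze location.
    return ((target[0] - estimate[0]) ** 2 + (target[1] - estimate[1]) ** 2) ** 0.5

# Hypothetical (x, y) positions in cm on the table surface, one (target, estimate) pair per trial.
trials = {
    "static neck":  [((10, 20), (18, 31)), ((40, 15), (52, 25))],
    "dynamic neck": [((10, 20), (15, 27)), ((40, 15), (47, 21))],
    "person":       [((10, 20), (12, 22)), ((40, 15), (43, 17))],
}

for scenario, pairs in trials.items():
    errors = [error_distance(t, e) for t, e in pairs]
    # The mean error reflects accuracy; the standard deviation reflects consistency,
    # i.e. the "more consistent" estimates reported for the dynamic neck scenario.
    print(f"{scenario}: mean error = {statistics.mean(errors):.1f} cm, "
          f"spread = {statistics.stdev(errors):.1f} cm")

The abstract does not specify which statistical test was used to compare the scenarios, so this sketch only shows the error metric itself, not the significance analysis.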
Faculty
Faculteit der Sociale Wetenschappen