Laboratoire Psychologie de la Perception, Institut Neurosciences Cognition, Université Paris Descartes, Centre National de la Recherche Scientifique
Different spatial representations guide eye and hand movements

Our visual system allows us to localize objects in the world and to plan motor actions toward them. We recently showed that the localization of moving objects differs between perception and saccadic eye movements (Lisi & Cavanagh, 2015), suggesting different localization mechanisms for perception and action. This finding, however, could reflect a unique feature of the saccade system rather than a general dissociation between perception and action. To disentangle these hypotheses, we compared object localization between saccades and hand movements. We flashed brief targets on top of double-drift stimuli (moving Gabors whose internal pattern drifts orthogonally to their displacement, inducing large distortions in perceived location and direction) and asked participants to point or make saccades to them. We found a surprising difference between the two types of movements: while saccades targeted the physical location of the flashes, pointing movements were strongly biased toward the perceived location (capturing about 63% of the perceptual illusion). The same bias was found when pointing movements were made in open-loop conditions (without vision of the hand). These results indicate that dissociations are present between different types of actions, not only between action and perception, and that visual processing for saccadic eye movements differs from that for other actions. Since the position bias in the double-drift stimulus depends on a persistent influence of past sensory signals, we suggest that spatial maps for saccades may reflect only recent, short-lived signals, whereas the spatial representations supporting conscious perception and hand movements integrate visual input over longer temporal intervals.
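
As a rough illustration of how a bias of "about 63% of the perceptual illusion" can be expressed, here is a minimal Python sketch. All numbers and variable names below are hypothetical placeholders, not data or code from the study: the bias index is simply the shift of pointing endpoints along the illusion direction divided by the perceived shift measured in a separate perceptual task.

import numpy as np

# Hypothetical per-participant values (degrees of visual angle); these are
# illustrative placeholders only, not the study's data.
perceptual_shift = np.array([2.1, 1.8, 2.4, 2.0])  # perceived offset of the double-drift target
pointing_shift = np.array([1.3, 1.1, 1.6, 1.2])    # pointing endpoint offset along the illusion direction

# Bias index: fraction of the perceptual illusion captured by the pointing movement.
bias_index = pointing_shift / perceptual_shift
print(f"Mean pointing bias: {float(bias_index.mean()):.0%} of the perceptual illusion")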