
ENACTIVE /06
Enaction & Complexity

Proceedings of the 3rd International Conference on Enactive Interfaces

Montpellier, France
November 20-21, 2006

Table of Contents

Introduction 9
Conference Committees 11
Organizing Committee 11
Review Committee 12

KEYNOTES 13
Enactive perception: Sensorimotor expectancies or perception-action invariants? 15
William H. Warren
Understanding manipulation intelligence by observing, transferring and robotizing it 17
Yasuyoshi Yokokohji

THEMATIC SYMPOSIA 19
Enaction and the concept of Presence 21
Organizers: David Benyon and Elena Pasquinelli
Enaction and the concept of Presence 22
David Benyon & Elena Pasquinelli
David Benyon 24
Wijnand Ijsselsteijn 25
George Papagiannakis 26
Thomas A. Stoffregen 27

Interpersonal Enactive Interaction 29


Organizers: Richard Schmidt and Ludovic Marin
Interpersonal Enactive Interaction 30
Richard Schmidt & Ludovic Marin
Spontaneous synchronization and social memory in interpersonal coordination dynamics 31
Olivier Oullier, Gonzalo C. de Guzman, Kelly J. Jantzen, Julien Lagarde & J. A. Scott Kelso
Consequences of dance expertise on interpersonal interaction 33
Johann Issartel, Ludovic Marin & Marielle Cadopi
Visual and verbal informational constraints on interpersonal coordination 35
Michael J. Richardson
Perception of an intentional subject: An enactive approach 37
Charles Lenay, Malika Auvray, Francois-David Sebbah & John Stewart

Enaction, complexity, and multimodal HCI: Experimental, theoretical, and epistemological approaches 39
Organizers: Armen Khatchatourov and Julien Lagarde
A dynamical foundation of directional relationships in multimodal environments 40
Julien Lagarde & Benoît Bardy
Multisensory integration and segregation are based on distinct large-scale brain dynamics 42
Viktor K. Jirsa
Are usages computable? 44
Bruno Bachimont
Some epistemological considerations on relation between cognitive psychology and computer
interfaces (on the example of tactile-proprioception synergy) 45
Armen Khatchatourov & Annie Luciani

FREE TALKS 47
Role of the inertia tensor in kinaesthesia of a multi-joint arm reaching movement 49
Delphine Bernardin, Brice Isableu, Benoit Bardy & Paul Fourcade
Considering the normalized vertical reach performance for consistently controlling virtual
mannequins from full-body input 51
Ronan Boulic, Damien Maupu & Daniel Thalmann
Computer technology and enactive assumption 53
Dominique Dionisi & Jacques Labiche
Tactile-force-feedback integration as an exemplary case for the sense of touch in VE. A new
T-FFD device to explore spatial irregularities. 57
Jean-Loup Florens, Armen Khatchatourov, Charles Lenay, Annie Luciani &
Gunnar Declerck
Adaptive acquisition of enactive knowledge 59
Wai-Tat Fu
Linking Perception and Action: A Task of Attention? 61
Nivedita Gangopadhyay
Dyadic postural activity as a function of support surface rigidity 63
Marc Russell Giveans, Kevin Shockley & Thomas Stoffregen
Anthropology of perception: sand drawings, body paintings, hand signs and ritual enaction
among Indigenous Australians 65
Barbara Glowczewski
Why perspective viewing of electronic documents should be allowed in the multi-purpose
graphical user interface 69
Yves Guiard, Yangzhou Du, Olivier Chapuis & Michel Beaudouin-Lafon
Computer/human structural coupling for data interpretation 71
Thomas Guyet, Catherine Garbay & Michel Dojat
Action for perception : influence of handedness in visuo-auditory sensory substitution 73
Sylvain Hanneton & Claudia Munoz
A virtual haptic-audio line drawing program 75
Charlotte Magnusson, Kirsten Rassmus-Gröhn & Håkan Eftring
Enaction approached by Tao and physiology 77
Nancy Midol & Weiguo Hu
Haptic exploration of buildings for visually impaired persons: Show me the map 79
Luděk Pokluda & Jiří Sochor
Using a scripting engine to facilitate the development of force fields in a distributed haptic setup 81
Chris Raymaekers, Tom De Weyer & Karin Coninx
The theatrical work of art: a collection of man-machine interactions? 83
Francis Rousseaux
Haptic interfaces for collaborative learning among sighted and visually impaired pupils in
primary school 85
Eva-Lotta Sallnäs, Jonas Mol & Kerstin Severinson Eklundh
Multimodal Interfaces for rehabilitation: applications and trend 87
Elisabetta Sani, Carlo Alberto Avizzano, Antonio Frisoli & Massimo Bergamasco
Dynamical and perceptual constraints on interpersonal coordination 89
Richard Schmidt
Auditory navigation interface featured by acoustic sensitivity common to blind and sighted
people 91
Takayuki Shiose, Shigueo Nomura, Kiyohide Ito, Kazuhiko Mamada, Hiroshi Kawakami &
Osamu Katai
Stabilization of posture relative to audio referents 93
Giovanna Varni, Thomas Stoffregen, Antonio Camurri, Barbara Mazzarino &
Gualtiero Volpe
Checking the two-thirds power law for shapes explored via a sensory substitution device 95
Mounia Ziat, Olivier Gapenne, John Stewart, Charles Lenay, Mehdi El Yacoubi &
Mohamed Ould Mohamed

POSTERS 97
A classification of video games deduced by a pragmatic approach 99
Julian Alvarez, Damien Djaouti, Rashid Ghassempouri, Jean-Pierre Jessel & Gilles Méthel
Perceptive strategies with an enactive interface 101
Yusr Amamou & John Stewart
Dissociating perceived exocentric direction and distance in virtual space 103
José Aznar-Casanova, Elton Matsushima, José Da Silva & Nilton Ribeiro-Filho
Spatio-temporal coordination during reach-to-grasp movement 105
Patrice Bendahan & Philippe Gorce
Discrimination of tactile rendering on virtual surfaces 107
Alan C. Brady, Ian R. Summers, Jianguo Qu & Charlotte Magnusson
Accelerating object-command transitions with pie menus 109
Jérôme Cance, Maxime Collomb & Mountaz Hascoët
Color Visual Code: An augmented reality interface for mobile phone 111
Da Cao, Xubo Yang & Shuangjiu Xiao
Wearable automatic biometric monitoring system for massive remote assistance 113
Sylvain Cardin, Achille Peternier, Frédéric Vexo & Daniel Thalmann
Improving drag-and-drop on wall-size displays 115
Maxime Collomb & Mountaz Hascoët
Measurements on Pacinian corpuscles in the fingertip 117
Natalie L. Cornes, Rebecca Sulley, Alan C. Brady & Ian R. Summers
Influence of the manual reaching preparation movement on visuo-spatial attention during
a visual research task 119
Alexandre Coutté & Gérard Olivier
Finding the visual information used in driving around a bend: An experimental approach 121
Cécile Coutton-Jean, Daniel Mestre & Reinoud J. Bootsma.
Does Fitts' law sound good? 123
Amalia de Götzen, Davide Rocchesso & Stefania Serafin
Fractal models for uni-manual timing control 125
Didier Delignières, Kjerstin Torre & Loïc Lemoine
Modification of the initial state of the motor system alters movement prediction 127
Laurent Demougeot & Charalambos Papaxanthis
Plantar pressure biofeedback device for foot unloading: Application to the diabetic foot 129
Aurélien Descatoire, Virginie Femery & Pierre Moretto
Oscillating an object under inertial or elastic load: The predictive control of grip force
in young and old adults 131
Médéric Descoins, Frédéric Danion & Reinoud J. Bootsma
Expressive audio feedback for intuitive interfaces 133
Gianluca D’Incà & Luca Mion
The Mathematical theory of integrative physiology: a fundamental framework for human
system integration and augmented human design 135
Didier Fass & Gilbert Chauvet
Learning a new ankle-hip pattern with a real-time visual feedback: Consequences on
preexisting coordination dynamics 137
Elise Faugloire, Benoît G. Bardy & Thomas A. Stoffregen
Influence of task constraints on movement and grip variability in the development of
spoon-using skill 139
Paula Fitzpatrick
Exploratory movement in the perception of affordances for wheelchair locomotion 141
Moira B. Flanagan, Thomas A. Stoffregen & Chih-Mei Yang
Posture and the amplitude of eye movements 143
Marc Russell Giveans, Thomas Stoffregen & Benoît G. Bardy
Motor learning and perception in reaching movements of flexible objects with high and low
correlations between visual and haptic feedbacks 145
Igor Goncharenko, Mikhail Svinin, Yutaka Kanou & Shigeyuki Hosoe
Kinesthetic steering cues for road vehicle handling under emergency conditions 147
Arthur J. Grunwald
Creating prototypes of prospective user activities and interactions through acting by design
team and users 149
Kimitake Hasuike, Eriko Tamaru & Mikio Tozaki
A couple of robots that juggle balls: The complementary mechanisms for dynamic coordination 151
Hiroaki Hirai & Fumio Miyazaki
Perception of multiple moving objects through multisensory-haptic interaction: Is haptic so
evident for physical object perception? 153
Maxime Houot, Annie Luciani & Jean-Loup Florens
SHAKE – Sensor Hardware Accessory for Kinesthetic Expression 155
Stephen Hughes & Sile O’Modhrain
Experimental stability analysis of a haptic system 157
Thomas Hulin, Jorge Juan Gil, Emilio Sánchez, Carsten Preusche & Gerd Hirzinger
Stabilization of task variables during reach-to-grasp movements in a cluttered environment 159
Julien Jacquier-Bret, Nasser Rezzoug & Philippe Gorce
CoMedia: Integrating context cues into mobile group media for spectators 161
Giulio Jacucci, Antti Salovaara, Antti Oulasvirta, Tommi Ilmonen & John Evans
Motor images dynamics: an exploratory study 163
Julie Jean, Lorène Delcor & Michel Nicolas
Exploring the boundaries between perception and action in an interactive system 165
Manuela Jungmann, Rudi Lutz, Nicolas Villar, Phil Husbands & Geraldine Fitzpatrick
Interaction between saccadic and smooth pursuit eye movements: first behavioural evidences 167
Stefano Lazzari, Benoit Bardy, Helena Grillon & Daniel Thalmann
The effect of discontinuity on timing processes 169
Loïc Lemoine, Kjerstin Torre & Didier Delignières
The effects of aging on road-crossing behavior: The interest of an interactive road crossing
simulation 171
Régis Lobjois, Viola Cavallo & Fabrice Vienne
Simulating a virtual target in depth through the invariant relation between optics, acoustics
and inertia 173
Bruno Mantel, Luca Mion, Benoît Bardy, Federico Avanzini & Thomas Stoffregen
Visual cue enhancement in the vicinity of the tangent point can improve steering in bends 175
Franck Mars
Improving human movement recovery using qualitative analysis 177
Barbara Mazzarino, Manuel J. Peinado, Ronan Boulic, Marcelo Wanderley & Antonio
Camurri
Enactive theorists do it on purpose: Exploring an implicit demand for a theory of goals 179
Marek McGann
Visual anticipation of road departure 181
Isabelle Milleville-Pennel & Aurélia Mahé
Dynamics of visual perception during saccadic programming 183
Anna Montagnini & Eric Castet
Motor performance as a reliable way for tracking face validity of Virtual Environments 185
Antoine Morice, Isabelle Siegler & Benoît Bardy
Evaluation of a motor priming device to assist car drivers 187
Jordan Navarro, Frank Mars & Jean-Michel Hoc
Some experimental data on oculomotor intentionality 189
Gérard Olivier
Bodily simulated localization of an object during a perceptual decision task 191
Gérard Olivier & Sylvane Faure
Inscribing the user’s experience to enact development 193
Magali Ollagnier-Beldame
Enactive perception and adaptive action in the embodied self 195
Adele Pacini, Phil Barnard & Tim Dalgleish
Presence and Interaction in mixed realities 197
George Papagiannakis, Arjan Egges & Nadia Magnenat-Thalmann
Enactic applied to sea state simulation 199
Marc Parenthoën & Jacques Tisseau
The role of expectations in the believability of mediated interactions 201
Elena Pasquinelli
Exploring full-body interaction with environment awareness 203
Manuel Peinado, Ronan Boulic, Damien Maupu, Daniel Meziat & Daniel Thalmann
Mimicking from perception and interpretation 205
Catherine Pelachaud, Elizabetta Bevacqua, George Caridakis, Kostas Karpouzis,
Maurizio Mancini, Christopher Peters & Amaryllis Raouzaiou
Posting real balls through virtual holes 207
Gert-Jan Pepping
Towards space concept integration in navigation tools 209
Edwige E. Pissaloux, Flavien Maingreaud, Eléanor Fontaine & Ramiro Velazquez
Synchronisation in anticipative sensory-motor schemes 213
Jean-Charles Quinton, Christophe Duverger & Jean-Christophe Buisson
Motor and parietal cortical areas both underlie kinaesthesia 215
Patricia Romaiguère, Jean-Luc Anton, Laurence Casini & Jean-Pierre Roll
Evaluation of a haptic game in an immersive environment 217
Emanuele Ruffaldi, Antonio Frisoli & Massimo Bergamasco
Manipulability map as design criteria in systems including a haptic device 221
Jose San Martin & Gracian Trivino
The roles of vision and proprioception in the planning of arm movements 223
Fabrice Sarlegna & Robert Sainburg
Choreomediating kinaesthetic awareness and creativity 225
Gretchen Schiller
Grasping affordances 227
Joanne Smith & Gert-Jan Pepping
An innovative portable fingertip haptic device 229
Massimiliano Solazzi, Antonio Frisoli, Fabio Salsedo & Massimo Bergamasco
Stability of in-phase and anti-phase postural patterns with hemiparesis 231
Deborah Varoqui, Benoît G. Bardy, Julien Lagarde & Jacques-Yvon Pélissier
Dynamical principles of coordination reduce complexity: the example of Locomotor
Respiratory Coupling 233
Sébastien Villard, Denis Mottet & Jean-François Casties
Simulation validation and accident analysis in operator performance 235
Michael G. Wade & Curtis Hammond
Catching and judging virtual fly balls 237
Frank T. J. M. Zaal, Joost D. Driesens & Raoul M. Bongers
Zoomable user interfaces: Ecological and Enactive 239
Mounia Ziat, Olivier Gapenne, Charles Lenay & John Stewart

List of demos 241


First Author Index 243
Keyword Index 245
Introduction

The Third International Conference on Enactive Interfaces (Enactive / 06) is the annual conference
of the European Network of Excellence ENACTIVE. Extending our first two meetings, the aim of
the conference is to encourage the emergence of a multidisciplinary research community around a new
field of research and a new generation of human-computer interfaces called Enactive Interfaces.
Enactive Interfaces are inspired by a fundamental concept of “interaction” that has not been
exploited by other approaches to the design of human-computer interface technologies.

ENACTIVE knowledge is information gained through perception-action interactions with the
environment. Examples include information gained by grasping an object, by hefting a stone, or by
walking around an obstacle that occludes our view. It is gained through intuitive movements, of
which we often are not aware. Enactive knowledge is inherently multimodal, because motor actions
alter the stimulation of multiple perceptual systems. Enactive knowledge is essential in tasks such
as driving a car, dancing, playing a musical instrument, modelling objects from clay, performing
sports, and so on. Enactive knowledge is neither symbolic nor iconic. It is direct, in the sense that it
is natural and intuitive, based on experience and the perceptual consequences of motor acts.

ENACTIVE / 06 highlights particularly convergences between the concept of Enaction and the
sciences of complexity. Biological, cognitive, perceptual or technological systems are complex
dynamical systems exhibiting (in)stability properties that are consequential for the agent-
environment interaction. The conference provides new insights, through the prism of ENACTIVE
COMPLEXITY, about human interaction with multimodal interfaces.

Building on the tradition of past symposia, these Proceedings present the state of the art in theory
and applications of Enaction, in a wide variety of topics and disciplines. Altogether, ENACTIVE /
06 includes 2 keynote lectures, 3 thematic symposia (4 speakers each), 22 free talks, 69 posters and
13 hands-on / demos, for a total of 118 presentations. Speakers originate from 18 countries:
Belgium, Brazil, Canada, China, Denmark, Finland, France, Greece, Ireland, Israel, Italy,
Japan, The Netherlands, Spain, Sweden, Switzerland, the United Kingdom, and the USA. Participants at the
conference include about 200 researchers, students, or engineers, from several disciplines in
science, technology, and art.

Many thanks are due.


In the first place, we wish to thank all the scientists who contributed to this volume and
participated in the conference, and we express our gratitude and encouragement to the authors
whose submissions were not successful. We also express our gratitude to our co-chairs, Teresa
Gutiérrez, Roberto Casati, and Carsten Preusche, for their help in preparing the scientific program
of ENACTIVE / 06, and to the 51 members of the review committee who have evaluated the
contributions. They are too numerous to appear here but their names are listed in this volume. To
all, thank you for having contributed to this conference.

The project benefited from the constant support of Marielle Cadopi, Dean of the Faculty of
Sport and Movement Sciences of the University Montpellier-1 (UM1), and we thank her warmly.
Thanks are also due to Dominique Deville de Périère, President of UM1, and to Jean-Claude Rossi,
Research Vice-president of UM1, for their continuous support. Special thanks are due to Philippe
Paillet, the director of the Financial and Economic Division of UM1, and to his team, for their
irreplaceable help.

Support for the Third International Conference on Enactive Interfaces was provided by the
European Network of Excellence ENACTIVE, the University Montpellier-1, the Faculty of Sport
and Movement Sciences, the Motor Efficiency and Deficiency Laboratory, The Institut
Universitaire de France, the French Ministère de l’Education Nationale, de l’Enseignement
Supérieur et de la Recherche, the Conseil Régional Languedoc-Roussillon, the Conseil Général de
l’Hérault, and the Montpellier Agglomération. Without these sources of funding, neither the
conference nor the publication of the present volume would have been possible.

Finally, we would like to express our gratitude to the members of the Motor Efficiency and
Deficiency Laboratory, and specifically to Valérie Caillaud, Didier Delignières, Elise Faugloire,
Johann Issartel, Perrine Guérin, Julien Lagarde, Stefano Lazzari, Bruno Mantel, Ludovic Marin,
Lorène Milliex, Sofiane Ramdani, Kjerstin Torre, Deborah Varoqui for their help in organizing
ENACTIVE / 06.

Bruno Mantel (web site), Perrine Guérin (proceedings), Julien Lagarde (program), Lorène
Milliex and Valérie Caillaud (general secretariat), Caroline Bourdarot and Céline Gauchet (logistics
and finances) have contributed tremendously to the preparation of this event. Without them, this
conference would not have been possible.

Montpellier, November 2, 2006

Benoît G. Bardy
Denis Mottet

Chairs, Enactive / 06

Conference Committees

Chairs

Benoît Bardy
University Montpellier 1
Denis Mottet
University Montpellier 1

Co-Chairs

Technology and Applications


Teresa Gutiérrez
LABEIN, Bilbao

Human Action and Perception


Roberto Casati
Institut Jean Nicod, Paris

Hands-on Demos and Tutorials


Carsten Preusche
DLR, Oberpfaffenhofen

Organizing Committee

Benoît Bardy
Marielle Cadopi
Valérie Caillaud
Didier Delignières
Elise Faugloire
Perrine Guérin
Johann Issartel
Julien Lagarde
Stefano Lazzari
Bruno Mantel
Ludovic Marin
Lorène Milliex
Denis Mottet
Sofiane Ramdani
Kjerstin Torre
Déborah Varoqui
Review Committee
Federico Avanzini, University of Padova
Carlo Avizzano, PERCRO - SSSA
Benoît Bardy, University Montpellier 1
Michel Beaudouin-Lafon, University Paris Sud XI
Peter Beek, Free University of Amsterdam
David Benyon, Napier University
Delphine Bernardin, McGill University
Ronan Boulic, EPFL
Stephen Brewster, University of Glasgow
Marielle Cadopi, University Montpellier 1
Richard Carson, Queen's University of Belfast
José Da Silva, University of São Paulo-RP
Didier Delignières, University Montpellier 1
Barbara Deml, University of the Bundeswehr Munich
Marcos Duarte, University of São Paulo-SP
Elise Faugloire, University of Minnesota
John Flach, Wright State University
Antonio Frisoli, PERCRO - SSSA
Yves Guiard, CNRS
David Guiraud, INRIA
Teresa Gutiérrez, LABEIN
Mountaz Hascoët, University Montpellier 2
Gunnar Jansson, Uppsala University
John Kennedy, University of Toronto
Hyung Seok Kim, University of Geneva
Kazutoshi Kudo, University of Tokyo-Komaba
Julien Lagarde, University Montpellier 1
Stefano Lazzari, University Montpellier 1
Annie Luciani, Institut National Polytechnique de Grenoble
Charlotte Magnusson, Lund University
Ludovic Marin, University Montpellier 1
Denis Mottet, University Montpellier 1
Alva Noë, University of California, Berkeley
Sile O'Modhrain, Queen's University of Belfast
Olivier Oullier, University of Provence
Chris Pagano, Clemson University
Jean Petitot, Ecole Polytechnique
Carsten Preusche, German Aerospace Center
Sofiane Ramdani, University Montpellier 1
Chris Raymaekers, Hasselt University
Mike Riley, University of Cincinnati
Eva-Lotta Sallnäs, Royal Institute of Technology
Emilio Sanchez, CEIT
Masato Sasaki, University of Tokyo-Hongo
Isabelle Siegler, Université Paris Sud XI
John Stewart, COSTECH
Tom Stoffregen, University of Minnesota
Ian Summers, University of Exeter
Indira Thouvenin, Tech. University of Compiègne
Steve Wallace, San Francisco State University
Marcelo Wanderley, SPCL-McGill University

KEYNOTES
Enactive perception: Sensorimotor expectancies or perception-action
invariants?

William H. Warren
Department of Cognitive and Linguistic Sciences, Brown University, USA
bill_warren@brown.edu

Enactive interfaces are predicated on the notion of enactive knowledge, a form of non-
symbolic, intuitive knowledge grounded in the act of "doing." How are we to conceive of enactive
knowledge? I will discuss two desiderata. First, it must support interactions with a virtual world or
control space that are parallel to those of natural perceiving and acting. Second, it must support an
experience of perceptual "presence," capturing the intentionality or object-directedness of natural
perceiving and acting. I argue that enactive knowledge is constituted by perception-action
mappings, and that these can only be understood in the context of the behavioral dynamics of an
activity. The perceptual presence of objects derives from invariant information defined over
perception and action, not from sensory-motor expectancies. This permits an enactive account of
interaction and intentionality that avoids the circularity of motor theories of mind.

About William H. Warren:


William H. Warren is Professor and Chair of the Department of Cognitive and Linguistic
Sciences and the co-director of the Virtual Environment Navigation Lab (VENLab) at Brown
University, Providence, RI, USA. He uses virtual reality techniques to study the visual control of
human locomotion, spatial navigation, and the dynamics of perceptual-motor behavior.
Warren received his Ph.D. in Experimental Psychology from the University of Connecticut
(1982), did post-doctoral work at the University of Edinburgh (1983), and has been a professor at
Brown ever since.
He is the recipient of a National Research Service Award from NIH (1983), a Fulbright
Research Fellowship (1989), a Research Career Development Award from NIMH (1997-2002), and
Brown's Elizabeth Leduc Teaching Award for Excellence in the Life Sciences.

References
Warren, W. H. (2006). The dynamics of perception and action. Psychological Review, 113(2), 358-389.
Foo, P., Warren, W. H., Duchon, A., & Tarr, M. J. (2005). Do humans integrate routes into a cognitive map? Map- vs.
landmark-based navigation of novel shortcuts. Journal of Experimental Psychology: Human Memory and
Learning, 31, 195-215.
Warren, W. H. (2004). Optic flow. In L. Chalupa & J. Werner (Eds.) The Visual Neurosciences, v. II. Cambridge, MA:
MIT Press, 1247-1259.
Fajen, B. R., & Warren, W. H. (2003). Behavioral dynamics of steering, obstacle avoidance, and route selection.
Journal of Experimental Psychology: Human Perception and Performance, 29, 343-362.
Tarr, M. J., & Warren, W. H. (2002). Virtual reality in behavioral neuroscience and beyond. Nature Neuroscience
(Suppl.), 5, 1089-1092.
Warren, W. H., Kay, B. A., Duchon, A. P., Zosh, W., & Sahuc, S. (2001). Optic flow is used to control human walking.
Nature Neuroscience, 4, 213-216.
Understanding manipulation intelligence by observing, transferring and
robotizing it

Yasuyoshi Yokokohji
Department of Mechanical Engineering and Science, Graduate School of Engineering,
Kyoto University, Japan
yokokoji@mech.kyoto-u.ac.jp

We are far from fully understanding the intelligence behind manual dexterity. Such
intelligence is often called "skill" or "manipulation intelligence". One of the goals of robotics has
been to make a mechanical system (i.e. robotic hand) that has the same dexterity as humans.
However, research has tended to duplicate the appearance of a human hand without a clear
understanding of manipulation intelligence.
In this talk, I will introduce our efforts towards understanding manipulation intelligence. For
example, although the primary purpose of a teleoperation system is often to perform complicated
tasks within hazardous and unstructured environments, we have chosen to utilize the teleoperation
system in a novel manner, as a research platform upon which to observe human activities during a
task. We have also developed a VR (virtual reality) platform to simulate multi-finger grasping tasks,
the groundwork for which will be based on intensive observation of human grasping. In addition,
we are now investigating the possibility of using a robotic system as a medium for transferring
manipulation intelligence from one human to others; the fundamental concept behind this is called
"mechano-media". Finally, I will show the results of recent research we conducted utilizing a
robotic hand capable of performing a highly sophisticated task, the appearance of which is not at all
that of a human hand.

About Yasuyoshi Yokokohji :


Yasuyoshi Yokokohji received B.S. and M.S. degrees in precision engineering in 1984 and
1986, respectively, and a Ph.D. degree in mechanical engineering in 1991, all from Kyoto University,
Kyoto, Japan.
From 1988 to 1989, he was a Research Associate in the Automation Research Laboratory,
Kyoto University. From 1989 to 1992, he was a Research Associate in the Division of Applied
Systems Science, Faculty of Engineering, Kyoto University. From 1994 to 1996, he was a visiting
research scholar at the Robotics Institute, Carnegie Mellon University, Pittsburgh, PA. He is
currently an Associate Professor in the Department of Mechanical Engineering, Graduate School of
Engineering, Kyoto University. His current research interests are robotics and virtual reality
including teleoperation systems, vision-based tracking, and haptic interfaces.
Dr. Yokokohji is a member of the Institute of Systems, Control and Information Engineers
(Japan), the Robotics Society of Japan, the Society of Instruments and Control Engineers (Japan),
the Japan Society of Mechanical Engineers, the Society of Biomechanisms of Japan, the Virtual
Reality Society of Japan, IEEE and ACM.

References
Yokokohji, Y., Muramori, N., Sato, Y., & Yoshikawa, T. (2005). Designing an Encountered-Type Haptic Display for
Multiple Fingertip Contacts based on the Observation of Human Grasping Behavior. International Journal of
Robotics Research, 24(9), 717-730.
Yokokohji, Y., Kitaoka, Y., & Yoshikawa, T. (2005). Motion Capture from Demonstrator's Viewpoint and Its
Application to Robot Teaching. Journal of Robotic Systems, 22(2), 87-97.
Yokokohji, Y., Iida, Y., & Yoshikawa, T. (2003). 'Toy Problem' as the Benchmark Test for Teleoperation Systems.
Advanced Robotics, 17(3), 253-273.
Yokokohji, Y., Hollis, R. L., & Kanade, T. (1999). WYSIWYF Display: A Visual/Haptic Interface to Virtual
Environment. PRESENCE, Teleoperators and Virtual Environments, 8(4), 412-434.
Yokokohji, Y., & Yoshikawa, T. (1994). Bilateral Control of Master-Slave Manipulators for Ideal Kinesthetic Coupling
- Formulation and Experiment. IEEE Transactions on Robotics and Automation, 10(5), 605-620.

THEMATIC SYMPOSIA
Enaction and the concept of Presence

David Benyon
School of Computing, Napier University, Edinburgh, UK
D.Benyon@napier.ac.uk

Elena Pasquinelli
Institut Jean Nicod, Ecole des Hautes Etudes en Sciences Sociales, Paris, France
Elena.Pasquinelli@ehess.fr


David Benyon
School of Computing
Napier University
10, Colinton Road Edinburgh, EH14 1LJ, UK
D.Benyon@napier.ac.uk

Wijnand Ijsselsteijn
Human-Technology Interaction Group
Eindhoven University of Technology
P.O. Box 513 5600 Eindhoven, The Netherlands
w.a.ijsselsteijn@tue.nl

George Papagiannakis
MIRALab
Université de Génève,
10, Rue du Conseil Général CH-1211 Genève 4, Switzerland
papagiannakis@miralab.unige.ch

Thomas Stoffregen
Human Factors Research Laboratory
University of Minnesota
202B Cooke Hall 1900 University Ave. S.E. Minneapolis, MN 55455, USA
tas@umn.edu
Enaction and the concept of Presence
David Benyon¹ & Elena Pasquinelli²
¹ School of Computing, Napier University, UK
² Institut Jean Nicod, Ecole des Hautes Etudes en Sciences Sociales, France
d.benyon@napier.ac.uk

Bruner describes three systems or ways of organizing knowledge and three correspondent
forms of representation of the interaction with the world: enactive, iconic and symbolic (Bruner,
1966; Bruner, 1968). Enactive knowledge is constructed on motor skills, enactive representations
are acquired by doing, and doing is the means for learning in an enactive context. Enactive
interaction is direct, natural and intuitive. In order to give rise to believable experiences with
enactive interfaces it is necessary to respect certain conditions of the interaction with the real world,
such as the role played by action in the shaping of the perceptual content, the role of active
exploration and the role of perception in the guidance of action. The close coupling of the
perception-action loop is hence a key characteristic of enactive interfaces.

These notions resonate with Dourish’s (2001) embodied interaction and the ‘digital ground’ of
Malcolm McCullough (2004) that emphasises a sense of place and environmental knowing in an era
of ubiquitous computing. Meaning is not directly assignable to sentences of a formal or symbolic
language. Thinking is derived from an embodied and social experience based on foundational,
figurative concepts projected from previous experience. Underlying these ideas is a philosophy of
phenomenology and the idea of ‘being-in-the-world’ emphasised by Heidegger and Merleau-Ponty
amongst others.

Research into the concept of presence also draws upon these notions. Presence has been
defined as ‘the illusion of non-mediation’ (Lombard and Ditton, 1997). It is concerned with
developing technologies that make it possible for people to feel as if they are somewhere else, or to
feel that distant others are in fact close by. It is concerned with telepresence and haptic interaction,
with new places for interaction, with gesture, full body and other forms of multimodal interactions.

The aim of this special session at Enactive/06 is to explore relationships and differences
between Enaction and Presence. Members drawn from the Presence community by the Peach
Network of Excellence in Presence Research will debate fundamental concepts, philosophical roots
and practical examples of enactive interfaces and presence with people drawn from the Enactive
network.

The debate will concern the following questions:

1. Believability is a notion that has been explored in the domains of literature, theatre, film, radio
drama and other non-interactive media. Believability has been conceived as the illusion of life,
the illusion that even the odd, caricatured, unrealistic characters of animation movies are alive.
Following this perspective, believable characters and agents in VR should be designed in such a
way as to provide the illusion of life and to permit the suspension of disbelief. But believability
as a mental state of illusion of reality is far from being conceptually and psychologically clear. It
can be understood as an illusion of non-mediation. But it is not evident that users, film viewers,
or other media audiences mistakenly take a fake world for real, nor that believability requires
this error. People in VR worlds know they are experiencing something that is not real, yet they
may react as if it were real. Can we consider believability as one condition that makes users and
spectators act and react as if the virtual world were real? In this case, which actions and
reactions should be taken into account as manifestations of believability?
2. The notion of realism as simulation of the real world is problematic too. The Japanese roboticist
Mori has introduced the notion of Uncanny Valley: the sense of familiarity increases in
association to realistic robotic artefacts (human-like robots) up to a certain point and then comes
to a valley of negative familiarity and strangeness. At the end of the valley there is another,
higher peak of positive familiarity associated with perfectly human-like artefacts and real
humans. The fall into the uncanny valley is especially experienced in the presence of discrepancies
between visual cues and cues from other sensory modalities, and between the realism of the
physical aspect and a lower level of realism for behavioural aspects. Is photorealism enough to
reproduce faithfully the physical characteristics of the real world, or should we be concerned with
other sensory modalities? Is it possible that coherence between different levels of simulation
and different aspects of the virtual worlds and agents plays a role that is more important than
realism? Is realism as simulation of the properties of the real world a necessary condition for
producing believable worlds and agents?

3. Feelings of connectedness come through presence allowing interfaces that provide a high degree
of presence to extend a person’s influence. A surgeon performing a tele-operation wants to feel
as if he or she is working on a body immediately to hand. The surgeon wants to make
movements and gestures as they would in the real world. What meanings can gestures have at
the technological scale and at the human scale?

4. The sense of immersion is also important to a sense of presence. But in mixed realities the
human is effectively immersed in both the real and the digital. How can realism, consistency
and believability be maintained as people move between representations? How can we design to
make interactions with digital artefacts as natural and intuitive as interactions with the physical
world with which we are so familiar?

References
Bruner, J. (1966). Toward a theory of instruction. Cambridge, MA: Belknap Press of Harvard University Press.
Bruner, J. (1968). Processes of cognitive growth: Infancy. Worcester, MA: Clark University Press.
Dourish, P. (2001). Where the action is. MIT Press.
Lombard, M., & Ditton, T. (1997). At the Heart of It All: The Concept of Presence. Journal of Computer-Mediated
Communication, 3(2), 1-39. http://www.ascusc.org/jcmc/vol3/issue2/lombard.html
McCullough, M. (2005). Digital Ground. MIT Press.

David Benyon
School of Computing, Napier University, UK
D.Benyon@napier.ac.uk

Dr. David Benyon is Professor of Human-Computer Systems at Napier University, Edinburgh.


Prof Benyon was a lead academic on the Benogo project funded under the Framework 5 FET
Presence initiative. This work has led to important theoretical work on a sense of place and to a
number of novel approaches to Human-Computer Interaction. His research focus is on ‘navigation
of information space’, a new view of HCI that focuses on how people find their way around the
information spaces created by new media. He has also published on semiotics and new media and
on applying experientialism to new media. The book Designing with Blends is due to be published
by MIT Press in 2007. Prof Benyon is a core member of the Peach co-ordination action for Presence
Research in Europe funded by the EU under Framework 6.

Position Statement
Through the experiences of the Framework 5 Presence project, Benogo, we were concerned
with reproducing real places. Issues of fidelity of representation were dealt with through using
photo-realistic images, but these were not enough; we also wanted to capture the sense of place
experienced, so as to design for presence as a sense of place. Presence is also concerned with
connectedness and embodiment in interaction. Presence allows people to extend their reach so that
media truly becomes an ‘extension of man’ in McLuhan’s terms. Interaction can move from the
technological focus of current designs to the ‘human scale’ needed for natural enactive interaction.
A third strand comes from mixing realities, when the digital is overlaid on the physical. Interaction
in these environments demands knowledgeable input from both the technology that operates in the
digital world and the people who operate in the physical world.

Wijnand Ijsselsteijn
Human-Technology Interaction Group, Eindhoven University of Technology, The Netherlands
w.a.ijsselsteijn@tue.nl

Dr. Wijnand IJsselsteijn has a background in psychology and artificial intelligence, with an
MSc in cognitive neuropsychology from Utrecht University, and a PhD in media psychology/HCI
from Eindhoven University of Technology. Since 1996, he has been active as a researcher working on
the scientific investigation of how humans experience and interact with advanced media
technologies. His current research interests include digital social media, embodiment of technology,
and the psychological effects of immersive interactive media systems.
He has been project coordinator for the Presence Research Working Group, which created the
vision upon which the first IST FET Presence Research Initiative was based, and the OMNIPRES
project, which was aimed towards theoretical integration, measurement validity, and increasing the
larger R&D impact of the Initiative. He is currently involved in the PASION project as leader of the
Basic Research workpackage.

Position Statement
Presence research studies the experience of being in a place or being with someone as it is
mediated through technology. The fact that media technology, such as teleoperator or simulator
systems, can start working as a transparent extension of our own bodies is critically dependent on
(i) intuitive interaction devices which are ‘invisible-in-use’, seamlessly matched to our sensorimotor
abilities, and (ii) the highly plastic nature of our brain, which is continuously able and prone to
adapt to altered sensorimotor contingencies. The perception of ourselves as part of an environment,
virtual or real, critically depends on the ability to actively explore the environment, allowing the
perceptual systems to construct a spatial map based on predictable and specific sensorimotor
dependencies. Provided the real-time, reliable correlations between motor actions and multisensory
inputs remain intact, the integration of telepresence technologies into our dynamic loop of
sensorimotor correspondence can be usefully understood as a change in body image perception – a
phenomenal extension of the self. Based on insights which emerge, at least in part, from an enactive
approach to perception, research in telepresence may be usefully centred around the ability of media
technology to transform reality into an augmented environment our bodies and brains are better
equipped to deal with. In this way, telepresence technologies become ‘mind tools’ - enhancing our
perceptual, cognitive, and motor abilities, and profoundly changing our perception of self in the
process.

George Papagiannakis
MIRALab, University of Geneva, Switzerland
papagiannakis@miralab.unige.ch

Dr. George Papagiannakis is a postdoc researcher at MIRALab, University of Geneva. He


obtained his PhD in Computer Science from University of Geneva, his B.Eng. (Hons) in Computer
Systems Engineering, at the University of Manchester Institute of Science and Technology
(UMIST) and his M.Sc (Hons) in Advanced Computing, at the University of Bristol. His research
interests lie mostly in the areas of mixed reality illumination models, real-time rendering,
virtual cultural heritage and programmable graphics. He has actively been involved and contributed
to the CAHRISMA, LIFEPLUS, STAR and ERATO European projects. Currently he is
participating in the ENACTIVE and EPOCH EU projects. MIRALab is the coordinator of the
Believability workpackage in the ENACTIVE project.

Position Statement
Over the last few years, many different systems have been developed to simulate scenarios
with interactive virtual humans in virtual environments in real-time. The control of these virtual
humans in such a system is a widely researched area, where many different types of problems are
addressed, related to animation, speech, deformation, and interaction, to name a few research topics.
The scope of applications for such systems is vast, ranging from virtual training or cultural heritage
to virtual rehabilitation. Although there are a variety of such platforms available with many
different features, we are still a long way from a completely integrated framework that is adaptable
for many types of ‘believable’ applications. Mixed Realities (MR) and their concept of cyber-real
space interplay invoke such interactive experiences that promote new patterns of believability and
presence. Believability is a term used to measure the realism of interaction in MR environments,
whereas presence is defined as the measure used to convey the feeling of ‘being there’. Our
typical rich-media storytelling case study features virtual characters in mobile AR. Although presence
is strengthened, believability does not keep pace, owing to the limited interaction between real
participants and virtual characters imposed by the limitations of mobile technology. We argue that
future steps in mobile MR enabling technologies should cater for enhanced social awareness of the
virtual humans towards the real world, and for new channels of interactivity between real users and
virtual actors. Only then will the believability factor of virtuality structures be enhanced, allowing
for compelling real experiences through mobile virtual environments.

Thomas A. Stoffregen
Human Factors Research Laboratory, University of Minnesota, USA
tas@umn.edu

Thomas A. Stoffregen is Professor of Kinesiology, and Director of the Human Factors


Research Laboratory at the University of Minnesota. His research interests include perception and
action in virtual environments, multimodal perception, embodied cognition, and the theory and
design of simulation and virtual reality systems.

Position statement
There seems to be a powerful desire to equate the quality of simulation of virtual environment
systems with their ability or tendency to induce certain subjective experiences in users, such as the
feeling of “being there”. Many concepts used in the study and design of virtual environment
systems are not rigorously defined, and have no widely accepted definition (e.g., presence,
immersion, fidelity). Weak definitions make it difficult to use such concepts for the formulation or
testing of empirical hypotheses about user performance. Weak concepts also make it difficult to
determine what factors truly are important in evaluating the quality of simulation and virtual
environment systems. In some applications (e.g., entertainment), the subjective experience of users
is the chief criterion for system quality. However, in many other applications, the subjective
experience of users is irrelevant, or of only indirect relevance to the quality of the system. In many
applications, it is the user’s behavioral interaction with the system that matters most, and the
success of this behavioral interaction often can be addressed through direct measures of behavior,
rather than through indirect measures of subjective experience.
Interpersonal Enactive Interaction

Richard Schmidt
Psychology Department, College of the Holy Cross, Worcester, USA
rschmidt@holycross.edu

Ludovic Marin
Motor Efficiency and Deficiency Laboratory, University Montpellier 1, Montpellier, France
ludovic.marin@univ-montp1.fr

Spontaneous synchronization and social memory in interpersonal coordination dynamics
Olivier Oullier, Gonzalo C. de Guzman, Kelly J. Jantzen, Julien Lagarde & J. A. Scott Kelso
Olivier Oullier
Laboratoire de Neurobiologie Humaine (UMR 6149)
University of Provence – CNRS
Pôle Comportement Cerveau & Cognition
3, place Victor Hugo, 13331 Marseille cedex 03, France
oullier@up.univ-mrs.fr

Consequences of dance expertise on interpersonal interaction


Johann Issartel, Ludovic Marin & Marielle Cadopi
Johann Issartel
Motor Efficiency and Deficiency Laboratory,
Faculty of Sport Sciences
University Montpellier 1
700 Avenue du Pic St Loup, 34090 Montpellier, France
johann.issartel@univ-montp1.fr

Visual and verbal informational constraints on interpersonal coordination


Michael J. Richardson
Michael J. Richardson
Department of Psychology
Colby College
5550 Mayflower Hill, Waterville, Maine, 04901 USA
mjrichar@colby.edu

Perception of an intentional subject: An enactive approach


Charles Lenay, Malika Auvray, Francois-David Sebbah & John Stewart
Charles Lenay
COSTECH
Department TSH, Bât. P. Guillaumat,
Technological University of Compiègne
BP60319 - 60206 Compiègne Cedex, France
charles.lenay@utc.fr
Interpersonal Enactive Interaction

Richard Schmidt¹ & Ludovic Marin²
¹ Psychology Department, College of the Holy Cross, USA
² Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
rschmidt@holycross.edu

“Enactive knowledge” is the knowledge stored in the form of motor responses and it is
acquired by the act of "doing". For example being an expert in playing the violin requires mastering
the interaction between the hand motor experience of the player and the sound of the violin
produced by his/her hands. This kind of knowledge is based on the experience and on the perceptual
responses to the motor acts of the player. This same enactive knowledge can be observed when two
persons interact together, which is called interpersonal interaction. In such a situation what both
persons “learn” is based on person 1’s perception of person 2 reacting to the movements of person
1. Interpersonal enactive interaction is rarely studied in the literature. However, it is present in our
everyday life as soon as two persons have perceptual contact. In this symposium we will remedy
this situation by showing the ubiquity of interpersonal enactive interaction through four different
talks. The first presentation, by Olivier Oullier and colleagues, will illustrate how the movements of
two human beings become spontaneously synchronized when vision of each others’ actions is
available, and that a remnant of the interaction remains (a kind of ‘social memory’) when vision is
removed. In the second presentation, authors (Johann Issartel and colleagues) will focus their
presentation on the consequences of expertise (in dance or in sport) on interpersonal interactions.
For example, when two dancers facing each other are instructed to move without any constraints,
they interact differently from non-dancers. The former are able to perform pluri-frequency
coordinations whereas the latter cannot. In the third talk, Michael Richardson will highlight the
ubiquitous nature of interpersonal coordination by presenting data from three experiments aimed at
understanding how visual and verbal information provide a coordinative medium for interpersonal
postural and rhythmic coordination. In the last presentation, Charles Lenay will show that the
perceptual interaction between two subjects, studied under minimalist experimental conditions,
reveals a specific perceptual dynamic: a collective dynamics that can be used as a cue to the other's
presence.
Spontaneous synchronization and social memory
in interpersonal coordination dynamics

Olivier Oullier¹,², Gonzalo C. de Guzman², Kelly J. Jantzen², Julien Lagarde³,² & J. A. Scott Kelso²
¹ Laboratoire de Neurobiologie Humaine (UMR 6149), University of Provence-CNRS, France
² Human Brain & Behavior Laboratory, Center for Complex Systems and Brain Sciences, Florida Atlantic University, USA
³ Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
oullier@up.univ-mrs.fr

What mechanisms underlie the formation and dissolution of bonds between human beings?
And how might such processes be quantified? To address these fundamental questions, we
introduce a simple experimental paradigm that reveals how the coupling between two people
emerges spontaneously, and the factors that govern it.

Pairs of participants executed rhythmic finger flexion-extension movements, each at their own
preferred frequency and amplitude without any external pacing from a metronome. They were not
given any instructions regarding the way to move with respect to each other. Experimental trials
were partitioned into three contiguous segments of equal duration, within which both
subjects had their eyes either closed or open. During the latter, participants looked at each other’s
finger motion and were also able to see their own finger. Two experimental conditions were
performed in which movement was uninterrupted: Eyes Open-Closed-Open (OCO) and Eyes
Closed-Open-Closed (COC).

When the eyes were closed, each subject produced movements independently at their own
frequency. Due to these intrinsic frequency differences, the relative phase φ between the subjects’
finger motions exhibited phase wrapping. However, following a simple auditory cue to open their
eyes, subjects spontaneously adopted in-phase motion, φ stabilizing around 0°. On a signal to close
the eyes again, the individual movement frequencies diverged and φ fell back into phase wrapping.
Spontaneous in-phase patterns also emerged in the OCO condition in the time segments when the
subjects had their eyes open.
Overall, with eyes open, subjects mutually couple at a common phase and frequency, whereas
with eyes closed, subjects’ movement trajectories diverge and they behave independently. Such
two-way mutual coupling is truly a result of spontaneous social interaction and may be
distinguished from previous dyadic studies in which one person may simply be intentionally
tracking (or driving) the other (Oullier et al., 2003; Schmidt et al., 1990; Temprado et al., 2003) or
maintaining his own rhythm (Schmidt & O’Brien, 1997).
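
The continuous relative phase φ is the central observable in this paradigm. The abstract does not state how φ was estimated; one common way to obtain a continuous relative phase from two rhythmic signals is through the analytic signal given by the Hilbert transform. The Python sketch below is only an illustration under that assumption; the simulated signals, sampling rate and variable names are invented for the example and are not taken from the study.

import numpy as np
from scipy.signal import hilbert, detrend

def relative_phase_deg(x, y):
    # Instantaneous phase of each signal via the analytic signal (Hilbert transform),
    # then their unwrapped difference: steadily drifting values = phase wrapping,
    # values fluctuating around ~0 deg = spontaneous in-phase coordination.
    phase_x = np.angle(hilbert(detrend(x)))
    phase_y = np.angle(hilbert(detrend(y)))
    return np.degrees(np.unwrap(phase_x - phase_y))

# Toy example: two uncoupled "fingers" moving at different preferred frequencies.
fs = 100.0                                  # assumed sampling rate (Hz)
t = np.arange(0.0, 20.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1.2 * t)             # participant 1 at 1.2 Hz
y = np.sin(2 * np.pi * 1.5 * t)             # participant 2 at 1.5 Hz
phi = relative_phase_deg(x, y)
print(phi[0], phi[-1])                      # steady drift, i.e. phase wrapping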

At first blush, our results might be considered as just another, albeit remarkable instantiation
of entrainment that entails nothing more than a couple of oscillators and a medium of information
exchange. Thus, one might be tempted to invoke the often-used Huygens’ clocks picture to describe
and explain this kind of self-organized social coordination process. A closer look at the frequency
distributions in the COC condition, however, reveals that subjects do not revert to their initial
‘preferred’ frequency when visual exchange is removed. Unlike mechanical oscillators, they adopt
a new frequency that is different from each subject’s pre-coupling frequency. Might the latter
reflect some kind of remnant of the social encounter?
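
Before turning to that quantification, the classical coupled-oscillator picture being questioned here can be made concrete with a minimal simulation: two phase oscillators that are coupled only during a middle "eyes open" segment entrain while coupled and then relax immediately back to their intrinsic frequencies, leaving no remnant of the interaction. All parameter values below are arbitrary illustrative choices, not values from the experiment.

# Minimal sketch of the Huygens-like picture: two phase oscillators with a
# coupling switched on only in the middle segment of a 60 s "trial".
# Unlike the human data, they revert exactly to their intrinsic frequencies
# once the coupling is removed. Parameters are illustrative assumptions.
import numpy as np

dt, T = 0.01, 60.0
n = int(T / dt)
w1, w2 = 2 * np.pi * 1.2, 2 * np.pi * 1.5   # intrinsic frequencies (rad/s)
K = 4.0                                     # coupling strength while "eyes open"
theta = np.zeros((n, 2))

for i in range(1, n):
    coupled = 20.0 <= i * dt < 40.0         # middle segment only
    k = K if coupled else 0.0
    d = theta[i - 1, 1] - theta[i - 1, 0]
    theta[i, 0] = theta[i - 1, 0] + dt * (w1 + k * np.sin(d))
    theta[i, 1] = theta[i - 1, 1] + dt * (w2 - k * np.sin(d))

# Mean frequencies (Hz) in the final uncoupled segment: back to ~1.2 and ~1.5.
f_end = np.diff(theta[int(50.0 / dt):], axis=0).mean(axis=0) / (2 * np.pi * dt)
print(f_end)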
To quantify this hypothesized ‘social memory’ effect, we analyzed the movement frequencies
and the power spectrum overlap PSO for the COC condition in two ways. First, using the PSO, we
measured ‘how similar’ to each other the subjects’ movement frequencies were, before and after
visual contact. The logic was that if the finger ‘oscillators’ act classically, à la Huygens’ clocks for
example, and revert to their respective intrinsic behaviours after severing visual contact, the PSO
should be identical for the two eyes-closed time segments of the COC condition. Calculations
reveal, however, that movement frequencies between each member of the dyad were significantly
more similar after spontaneous coupling than before in spite of the absence of visual exchange in
both cases. Instead of returning to their preferred frequency following the removal of visual
information, subjects appeared to be continually influenced by the previous coupled state. This is
also clear in the second measure, where we tracked the individual successive differences in peak
movement frequencies as a subject traverses the three time segments of the trial. Direct comparison
of the two eyes-closed segments (i.e. between pre-and post coupling) revealed significant
differences for each subject. After viewing each other’s finger movements, subjects did not relax
back to their initial frequency but adopted a new one as a result of their interaction.
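
The power spectrum overlap (PSO) is the index used here to ask how similar the two movement spectra are before and after visual contact. Its exact formula is not given in the abstract; one plausible reading, sketched below, normalizes each power spectrum to unit mass and sums the bin-wise minimum, yielding 1 for identical spectra and 0 for non-overlapping ones. The simulated signals, sampling rate and Welch settings are assumptions made for the illustration, not the authors' definition.

# Hedged sketch of a power spectrum overlap (PSO) index; the metric actually
# used in the study may be defined differently. Here: sum of the bin-wise
# minimum of the two unit-mass power spectra (1 = identical, 0 = disjoint).
import numpy as np
from scipy.signal import welch

def pso(x, y, fs):
    _, px = welch(x, fs=fs, nperseg=min(len(x), 1024))
    _, py = welch(y, fs=fs, nperseg=min(len(y), 1024))
    px, py = px / px.sum(), py / py.sum()        # normalize each spectrum
    return float(np.minimum(px, py).sum())       # overlapping spectral mass

# Toy usage: frequencies far apart (pre-coupling) vs. drawn together (post-coupling).
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
pre = pso(np.sin(2 * np.pi * 1.2 * t), np.sin(2 * np.pi * 1.5 * t), fs)
post = pso(np.sin(2 * np.pi * 1.30 * t), np.sin(2 * np.pi * 1.35 * t), fs)
print(pre, post)                                 # post > pre mirrors the 'social memory' effect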

What features of the visual information exchange might have facilitated spontaneous social
coordination? Evidence from both behavioural and neurophysiological studies indicates that the
formation of a coherent state depends on the functional and biological relevance of the stimulus that
participants provide to each other. Moreover, the crucial role played by “mirror neurons” during
both execution and observation of a movement suggests a potential neurophysiological basis for the
special nature of spontaneous social coordination we have observed where subjects execute and
observe the same movement at the same time.
A serendipitous finding was the consistent and persistent effect of the temporary phase- and
frequency-locked coupling on subsequent behaviours even after the social encounter was over and
subjects no longer exchanged information. Whereas many physical, biological and social systems
exhibit spontaneous entrainment in their natural setting, little attention has been paid to the
influence of coupling once uncoupled behaviour is resumed. Classical analysis based on Huygens’
coupled clocks is inadequate to explain the persistent effects of a temporary coupling such as found
here. The extent and duration of such carryover or remnant effects may reflect the strength of the
bond that is formed between people, place in the social hierarchy (e.g. boss~employee relation), the
willingness of each participant to cooperate, gender differences, personality characteristics (e.g.
degree of extraversion) and the significance each participant attaches to the social encounter. This
paradigm enables such issues to be precisely quantified in a well-defined experimental situation.
Although observational methods have elucidated various forms of social behaviour, the
present study offers a novel perspective and new metrics to explore systematically a fundamental
form of human bonding or lack thereof, and the self-organizing processes that underlie its
persistence and change.

References
Oullier, O., de Guzman, G. C., Jantzen, K. J., & Kelso, J. A. S. (2003). On context dependence of behavioral variability
in inter-personal coordination. International Journal of Computer Science in Sports, 2, 126-128.
Schmidt, R. C., Carello, C., & Turvey, M. T. (1990). Phase transitions and critical fluctuations in the visual
coordination of rhythmic movements between people. Journal of Experimental Psychology: Human Perception
and Performance, 16, 227-247.
Schmidt, R. C., & O'Brien, B. (1997). Evaluating the dynamics of unintended interpersonal coordination. Ecological
Psychology, 9, 189-206.
Temprado, J. J., Swinnen, S. P., Carson, R. G., Tourment, A., & Laurent, M. (2003). Interaction of directional,
neuromuscular and egocentric constraints on the stability of preferred bimanual coordination patterns. Human
Movement Science, 22, 339-363.
Consequences of dance expertise on interpersonal interaction
Johann Issartel, Ludovic Marin & Marielle Cadopi
Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
johann.issartel@univ-montp1.fr

According to Newell’s model (1986), coordination emerges from the interaction of constraints
operating at the level of the participant (intrinsic properties), the task and the environment. To date,
this model has been used in the study of intra-coordination (Newell, 1986) and postural
coordination (Marin et al., 1999). But it has not yet been confirmed for interpersonal coordination.
The specificity of interpersonal coordination is that there are at least two participants involved in
the same coordination. Consequently the emergence of the coordination is based on the intrinsic
property interactions of the couple. The goal of this presentation is to manipulate intrinsic properties
of the couple, and by extension, to confirm that Newell’s model is well suited to interpersonal
coordination. With respect to the experiments of Marin et al. (1999), manipulating the couples’
expertise constitutes a participant constraint. In this study we will compare expert modern dancers
with non-dancers. The choice of modern dancers as a representative population of experts is
justified because dance experts are accustomed to 1) acting collectively with other dancers and 2)
performing free movements (similar to an improvisational situation). In an improvisational task,
movements are made straight away; they emerge from the context. We hypothesize that the kind of
interaction, and thus the nature of the interpersonal coordination (Schmidt et al., 1990), will be
different between dancers and non-dancers. Each of these two groups should have a specific mode
of coordination.

Method
Two groups took part in the experiment. The first group comprised twelve right-handed
modern dance experts (5 females and 7 males), all of whom had more than 10 years’ experience in
modern dance. The second group consisted of twelve right-handed non-dancers (4 females and 8
males). Within each group, participants were randomly paired. For all experimental conditions,
participants were instructed to freely move their right forearm in the sagittal plane by exploring,
without constraint, the whole range of frequency, amplitude and phase. They sat on a chair at
normal height with their right elbow resting on the surface of the table. Their wrist and fingers had
to stay in line with the forearm, so that from the elbow to the fingertips the forearm and hand moved
together as one fixed unit. The palm of the hand was oriented towards the left and their gaze was
fixed in front of them. There were two experimental situations with 6 trials in each condition. In the
1st condition (Couple), the two participants sat facing each other and were instructed to take into
account the movements of the other to perform their own movements. Specifically, participant 1
was instructed to move as a function of participant 2, and vice versa. In the second condition (Alone),
each participant was alone in order to assess the individual intrinsic properties without any
influence of the other.
Elbow angular movements were collected with goniometers (Biometrics) at a sampling rate of
50 Hz. Data were analyzed with the cross-wavelet transform (Morlet 7 mother function; see
Issartel et al., 2006). This method gives access to the whole frequency range of the movements as a
function of time. Jointly, it gives access to the relative phase between the two participants as a
function of time for each frequency band.
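For illustration, the core of such a cross-wavelet analysis can be sketched in a few lines of Python. This is a simplified reimplementation for exposition only, not the procedure of Issartel et al. (2006); the Morlet parameter w0 = 7 follows the text, while the frequency grid, wavelet support and normalization are assumptions:

import numpy as np

def morlet(t, f, w0=7.0):
    # complex Morlet wavelet centred on frequency f (Hz), with w0 = 7 as in the text
    s = w0 / (2.0 * np.pi * f)                       # temporal width at frequency f
    return (np.pi ** -0.25) * np.exp(2j * np.pi * f * t - t**2 / (2.0 * s**2))

def cwt(x, freqs, fs):
    # continuous wavelet transform of x (1-D array, longer than the wavelet support)
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = 7.0 / (2.0 * np.pi * f)
        t = np.arange(-4.0 * s, 4.0 * s, 1.0 / fs)   # support of about +/- 4 widths
        out[i] = np.convolve(x, morlet(t, f), mode="same") / fs
    return out

def cross_wavelet_phase(x, y, freqs, fs=50.0):
    # relative phase (radians) between the two elbow-angle signals as a function
    # of time (columns) and frequency (rows), from the cross-spectrum Wx * conj(Wy)
    wx, wy = cwt(x, freqs, fs), cwt(y, freqs, fs)
    return np.angle(wx * np.conj(wy))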

Results
Intra and inter-condition comparisons (Friedman test) of individual intrinsic properties
revealed no differences between the two conditions (Alone and Couple) for all groups, indicating
that participants maintained their individual intrinsic properties independently of the conditions. A
Wilcoxon Mann-Whitney analysis of interpersonal coordination showed a statistical difference
between the two groups of participants in the Couple condition. Expert dancers were able to perform
pluri-frequency coordination at once, whereas non-dancers could not. Relative phase analysis also
revealed a statistical difference between the two groups. In the Couple condition expert dancers
performed in-phase and anti-phase coordination, whereas non-dancers maintained a relative phase
around 0°.

Discussion
Maintaining individual intrinsic properties independently of the experimental conditions
reveals that participants possess a reproducible motor signature even in a situation where
participants were free to move without any frequency, amplitude and phase constraints. Our results
also showed that dancers are able to perform coordinations that non-dancers are unable to do. In
modern dance choreography, experts are used to being tuned in to the others. They have learned to
be coordinated with someone else while performing anti-phase coordinations. Moreover,
performing pluri-frequency movements was only observed in the dancers’ group. It seems that this
is a hallmark of dance expertise. With regard to our principal goal – the consequences of expertise on
interpersonal coordination – the results illustrate the emergence of coordination based on expertise.
Our findings indicate that Newell’s model for intra-coordination can be extended to interpersonal
coordination: that is, different intrinsic property interactions (interactions between dancers vs. non-
dancers) favor the emergence of different modes of coordination.

References
Issartel, J., Marin, L., Gaillot, P., Bardainne, T., & Cadopi, M. (2006). A practical guide to time-frequency analysis in
the study of human motor behavior: The contribution of wavelet transform. Journal of Motor Behavior, 38(2),
139-159.
Marin, L., Bardy, B. G., & Bootsma, R. J. (1999). Gymnastic skill level as an intrinsic constraint on postural
coordination. Journal of Sports Sciences, 17, 615-626.
Newell, K. M. (1986). Constraints on the development of coordination. In Wade M.G., Whiting H.T.A. (Eds). Motor
Development in Children: Aspects of Coordination and Control (pp 341-360). Dordrecht, Nijhoff.
Schmidt, R. C., Carello, C., & Turvey, M. T. (1990). Phase transitions and critical fluctuations in the visual
coordination of rhythmic movements between people. Journal of Experimental Psychology: Human
Perception and Performance, 16, 227-247.
Visual and verbal informational constraints on interpersonal coordination
Michael J. Richardson
Colby College, USA
mjrichar@colby.edu

In the performance of everyday behavior an individual often coordinates his or her movements
with the movements of another individual. Such coordination is referred to as interpersonal
coordination or synchrony. Although this coordination is often obvious, intentional, and overtly
controlled through physical contact (e.g., when individuals are dancing or moving furniture
together), research has demonstrated that movement coordination can occur unintentionally in cases
where an interaction is less physical and more psychological in nature. That is, the movements of
visually or verbally interacting individuals can become coordinated even when the interactional
goal does not explicitly define the coordination itself (Richardson et al., 2005; Shockley et al.,
2003). In highlighting the ubiquitous nature of interpersonal coordination I will present a number of
recent experiments that have explored how visual and verbal information provide a coordinative
medium for interpersonal coordination.

Verbal information and interpersonal postural entrainment


Cooperative conversation has been shown to foster interpersonal postural coordination
(Shockley et al., 2003). We investigated whether such coordination is mediated by the influence of
articulation on postural sway. Pairs of participants were instructed to speak words in synchrony
(Figure 1b) or in alternation (Figure 1a), with speaking rate (self-paced, slow, and fast) and word
similarity (same words [S], different words/same stress pattern [DS], different words/different stress
pattern [DD]) manipulated across trials. The results revealed more recurrent postural activity
(%REC) the faster a pair of participants spoke (Figure 1c) and when the participants spoke the same
words, or words that had similar stress patterns (Figure 1d). These findings suggest that coordinated
speaking patterns (via the effects that articulation has on postural sway) mediate interpersonal
postural entrainment (Baker et al., in press).

Figure 1. Setup (a & b) and results (c & d) for the postural entrainment experiment.

Visual information and interpersonal rhythmic coordination


Previous research has demonstrated that the rhythmic limb movements of interacting
individuals can become intentionally (Schmidt & Turvey, 1994) and unintentionally (Richardson et
al., 2005; Schmidt & O'Brien, 1997) coordinated when visual information about a co-actor’s
movements is available. Further, these results have shown that such coordination is constrained by a
coupled oscillator dynamic: movements are attracted towards inphase (φ = 0°) and antiphase (φ =
180°) patterns of coordination. Recently, we extended this research by examining whether visual
information can unintentionally couple the rocking chair movements of interacting individuals.
Pairs of participants were instructed to rock in chairs positioned side-by-side at their own preferred
tempo (Figure 2a). The participants were instructed to focus their visual attention toward (focal
coupling) or away from (no coupling) the rocking movements of their co-actor. Participants were
also instructed to look directly ahead so that visual information about a co-actor’s movements was
only available in the periphery (peripheral coupling). As expected, the pairs exhibited more relative
phase angles around φ = 0° for the focal and peripheral conditions compared to the no coupling
condition (Figure 2b). The amount of coordination was also found to be greater for the focal
condition compared to the peripheral condition. These findings provide further evidence that visual
information can couple the rhythmic movements of interacting individuals and that the strength of
the interpersonal coupling is modulated by the degree to which the individuals attend towards the
relevant visual information.
So what visual information couples the rhythmic movements of interacting individuals? In an
attempt to answer this question, we employed an environmental coordination paradigm (Figure 2c)
to investigate whether certain phase regions of an oscillating stimulus’ cycle are more informative
for producing stable patterns of visual entrainment. Participants were instructed to intentionally
coordinate with an oscillating stimulus that had different regions (0°/180°, 90°/270°, 45°/135°) and
amounts (80°, 120°, 160°) of the stimulus trajectory cycle occluded from view. An analysis of the
entrainment variability (SDφ) revealed that occluding the endpoints (the phase regions at 0°/180°)
resulted in less stable wrist-stimulus coordination compared to when the middle phase regions
(90°/270° and 45°/135°) were occluded (Figure 2d). When the middle phase regions were occluded,
the stability of the coordination was the same as when none of the stimulus trajectory was occluded.
Further, the amount of phase occlusion was found to have no effect on the stability of the visual
coordination. These results indicate that the endpoints of a movement are most important for stable
visual entrainment and that pick-up of information from these locations is attentionally privileged.

Figure 2. Setup (a) and results (b) for the rocking chair experiment. Setup (c) and results (d) for the
environmental coordination experiment.
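The coupled oscillator constraint invoked above can be made concrete with the standard HKB relative-phase equation. The sketch below is generic and its parameter values are illustrative assumptions; it is not a model fitted to the data of the experiments described here:

import numpy as np

def hkb_relative_phase(delta_omega=0.0, a=1.0, b=0.5, noise=0.05,
                       dt=0.01, T=60.0, phi0=np.pi / 2, seed=0):
    # dphi/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi) + noise,
    # whose stable fixed points near 0 deg and 180 deg correspond to the
    # inphase and antiphase patterns of coordination mentioned above
    rng = np.random.default_rng(seed)
    phi = np.empty(int(T / dt))
    phi[0] = phi0
    for i in range(1, len(phi)):
        drift = delta_omega - a * np.sin(phi[i-1]) - 2 * b * np.sin(2 * phi[i-1])
        phi[i] = phi[i-1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return np.mod(phi, 2 * np.pi)

if __name__ == "__main__":
    phi = hkb_relative_phase()
    print("final relative phase (deg):", round(float(np.degrees(phi[-1])), 1))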

References
Richardson, M. J., Marsh, K. L., & Schmidt, R. C. (2005). Effects of visual and verbal interaction on unintentional
interpersonal coordination. Journal of Experimental Psychology: Human Perception and Performance, 31(1),
62-79.
Schmidt, R. C., & Turvey, M. T. (1994). Phase-entrainment dynamics of visually coupled rhythmic movements.
Biological Cybernetics, 70(4), 369-376.
Schmidt, R. C., & O'Brien, B. (1997). Evaluating the dynamics of unintended interpersonal coordination. Ecological
Psychology, 9(3), 189-206.
Shockley, K. D., Baker, A. A., Richardson, M. J., & Fowler, C. A. (in press). Verbal constraints on interpersonal
postural coordination. Journal of Experimental Psychology: Human Perception and Performance.
Shockley, K., Santana, M. V., & Fowler, C. A. (2003). Mutual interpersonal postural constraints are involved in
cooperative conversation. Journal of Experimental Psychology: Human Perception and Performance, 29, 326-
332.
Perception of an intentional subject: An enactive approach
Charles Lenay1, Malika Auvray1,2, Francois-David Sebbah1 & John Stewart1
1 COSTECH, Technological University of Compiègne, France
2 Department of Experimental Psychology, Oxford University, UK
charles.lenay@utc.fr

Classical approaches in the philosophy of mind consider that the recognition of intentionality
is the problem of the adoption of an intentional stance: identifying the behavioural criteria which
trigger the representation of the perceived object by an internal system of naive psychology
(Premack, 1990; Csibra et al., 1999; Meltzoff & Decety, 2004). This naive psychology poses many
problems, in particular, how to account for the mutual recognition without falling into the aporias of
the inclusion of representations: I have to have the representation of his representation of my
representation of… his perception. Furthermore, in this approach, the recognition of another subject
is only hypothetical, resulting from an inference based on well-defined perceptions.
However, in our everyday experience as well as in many phenomenological descriptions (e.g.,
Merleau-Ponty, 1945; Sartre, 1943) the lived experience of the presence of others seems certain and
directly perceptive. How, in everyday life or through technical devices (such as the Internet), can we
have the impression of the presence of another subject, and under which conditions can we
differentiate another person from an object or a program?

Within the alternate framework of ecological or enactive theories of perception (Gibson, 1966;
Varela, 1979; O’Regan & Noë, 2001) the question is not much more advanced since the recognition
of the presence of an intentional subject remains a decision which occurs after the perception of
determined forms and movements (Gibson & Pick, 1963). But how can we give an account of a direct
perception of the presence of others? How can we account for the enaction of the presence of an
intentional subject? Our hypothesis is that this is only possible in a situation of mutual recognition, a
situation in which two subjects perceive each other mutually.
For example, when we catch someone else’s eyes, it seems that we do not only perceive
particular movements; rather, we see directly that an intentional presence is looking at us. In order
to give an empirical content to this intuition, we conducted an experiment in the framework of
enactive interfaces. To do so, we built a technical mediation allowing us to strictly control the
perceptive actions and the sensory input received by each subject. Sensory stimulation was
reduced to the bare minimum (one bit of information at each moment) and the perceptive actions
were reduced to right-left movements in a one-dimensional space. This minimalist experimental
paradigm not only facilitates the identification of sufficient conditions for perception; above all, by
reducing the sensory input to just one bit of information at any given moment, it forces the subjects
to externalize their perceptive activity in the form of a trajectory that can easily be recorded, thus
providing good data for analysis.

Pairs of blindfolded participants, placed in separate rooms, interacted through a network of


two minimalist devices. Each participant moved a receptor field along a line via the displacement of
a computer mouse. Two additional objects were introduced in this one-dimensional space: a fixed
object and a mobile object whose movements were strictly identical to those of the partner’s receptor
field. Each time one of the subjects encountered an object or the partner’s receptor field, he received
an all-or-none tactile stimulation on his free hand. The task was to click whenever they judged that
the tactile sensations were due to having met the receptor field of the other participant. Results show
that, despite the absence of any difference in the sensory stimulation itself, participants were able to
recognize when the succession of all-or-none tactile stimuli they experienced was due to the active
exploration of another participant rather than to the fixed or the mobile object.

Table 1. Mean percentage (and standard deviation) of clicks, stimulation, and ratio between
clicks and stimulation obtained for the receptor field, mobile object, and fixed object.

                              Receptor field    Mobile object    Fixed object
Percentage of clicks          65.9 % ± 3.9      23.0 % ± 10.4    11.0 % ± 8.9
Percentage of stimulation     52.2 % ± 15.2     15.2 % ± 6.2     32.7 % ± 11.8
Ratio clicks / stimulations   1.26              1.51             0.33
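The all-or-none stimulation delivered by the device can be sketched as follows. This is an illustrative reconstruction of the logic described above, not the authors' implementation; the field width and the constant offset of the mobile object are assumptions:

FIELD_WIDTH = 1.0   # width of a receptor field or object, in arbitrary units (assumption)

def overlaps(a, b, width=FIELD_WIDTH):
    # all-or-none contact test on the one-dimensional line
    return abs(a - b) < width

def tactile_bit(receptor_pos, partner_pos, lure_offset, fixed_pos):
    # one bit of stimulation per time step: 1 whenever the subject's receptor field
    # overlaps the partner's receptor field, the mobile object (a copy of the
    # partner's movements shifted by a constant offset), or the fixed object
    lure_pos = partner_pos + lure_offset
    return int(overlaps(receptor_pos, partner_pos)
               or overlaps(receptor_pos, lure_pos)
               or overlaps(receptor_pos, fixed_pos))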

Within the alternative framework of enactive theories of perception, our experimental study
makes it possible to understand the recognition of another intentional subject as a characteristic
pattern in the sensorimotor dynamics of perception. These dynamics are essentially conjoint, the
situation of mutual perception forming an attractor which has no spatial stability. Thus, while
maintaining its presence, the other’s gaze resists spatial localization. I perceive another
intentional subject not thanks to determined patterns of movements, but rather directly as a
perceptive activity; as something that has the power to affect my own perceptual activity. In this
elementary form of interaction, we see that the collective dynamics constrain the perceptive
activities directly, without having to pass through a preliminary sharing of a common perceptual
content.

References
Csibra, G., Gergely, G., Biro, S., Koos, O., & Brockbank, M. (1999). Goal attribution without agency cues: The
perception of “pure reason” in infancy. Cognition, 72, 237-267.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
Gibson, J. J., & Pick, A. D. (1963). Perception of another person’s looking behaviour. American Journal of Psychology,
76, 386-394.
Meltzoff, A. N., & Decety, J. (2004). What imitation tells us about social cognition: A rapprochement between
developmental psychology and cognitive neuroscience, In C. Frith & D. Wolpert (Eds.), The neuroscience of
social interaction: Decoding, influencing, and imitating the actions of others. Oxford: Oxford University Press
(pp. 109-130).
Merleau-Ponty, M. (1945). Phenomenology of Perception. New York: Humanities Press, 1962.
O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain
Sciences, 24, 939-973.
Premack, D. (1990). The infant’s theory of self-propelled objects. Cognition, 36, 1-16.
Sartre, J. P. (1943). Being and Nothingness: An Essay on Phenomenological Ontology. Translated by Hazel E. Barnes.
New York: Philosophical Library, 1956.
Varela, F. (1979). Principles of Biological Autonomy. New York: Elsevier.
Enaction, complexity, and multimodal HCI: Experimental, theoretical, and
epistemological approaches

Armen Khatchatourov
COSTECH, Technological University of Compiègne, Compiègne, France
armen.khatchatourov@utc.fr

Julien Lagarde
Motor Efficiency and Deficiency Laboratory, University Montpellier 1, Montpellier, France
julien.lagarde@univ-montp1.fr

A dynamical foundation of directional relationships in multimodal environments


Julien Lagarde & Benoît Bardy

Julien Lagarde
Motor Efficiency and Deficiency Laboratory
Faculty of Sport Sciences
University Montpellier 1
700 Avenue du Pic St Loup
34090 Montpellier, France
julien.lagarde@univ-montp1.fr

Multisensory integration and segregation are based on distinct large-scale brain


dynamics
Viktor K. Jirsa

Viktor K. Jirsa
Theoretical Neuroscience Group UMR6152, CNRS, Marseille, France
Center for Complex Systems & Brain Sciences, Physics Department,
Florida Atlantic University, USA
jirsa@ccs.fau.edu

Are usages computable?


Bruno Bachimont

Bruno Bachimont
Institut National de l’Audiovisuel
Technological University of Compiègne, France
bbachimont@ina.fr

Some epistemological considerations on relation between cognitive psychology


and computer interfaces (on the example of tactile-proprioception synergy)
Armen Khatchatourov & Annie Luciani

Armen Khatchatourov
COSTECH
Technological University of Compiègne
Compiègne, France
armen.khatchatourov@utc.fr
A dynamical foundation of directional relationships
in multimodal environments

Julien Lagarde & Benoît Bardy


Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
julien.lagarde@univ-montp1.fr

Background
Enactive interfaces may benefit from incorporating some of the complexity present in natural
environments. Stoffregen and Bardy (2001) emphasized the relevance of intermodal invariants, e.g.,
relations across sources of stimulation. Here we studied the directionality, or causal relations,
between the elements composing a scene. Different modalities and movement can be bound into a
single coherent functional unit; however, this goal-directed assembly can be parametrically
destabilized and break down (Lagarde & Kelso, 2006). Do the causal relations between modalities
actually determine the stability of multimodal coordination patterns? We consider the simplest case
of an agent immersed in an environment in which at least two “elements” (e.g., a human or a moving
object) move independently but are interacting. To focus on multimodal coordination, let us consider
that one dynamic element present in the environment stimulates the agent via modality A, and the
other via modality B. Consider for instance driving a car in a traffic jam: the cars surrounding the
agent are indeed interacting autonomous systems, some visually perceived, others just heard.
The most intriguing cases arise from less symmetric interactions, in which one “element” is the driver
and the other the slave. We present a basic framework that operationally defines these dynamical
relations and selects the key parameter values that allow a sound implementation.
The general approach of “causal” relations between two independent systems is to examine if
the prediction of the behaviour of one is improved by incorporating information from the other
(Granger, 1969). This approach has been applied to systems having periodic behaviour (Rosenblum et al.,
2002). It was shown that perfect synchrony due to interaction prevents the detection of
directionality. Accordingly, to obtain a departure from perfect synchrony we introduced in each
element a random perturbation.

Method
We numerically simulated two coupled stochastic non-linear oscillators (Equation 1) to obtain
pairs of antiphase-locked periodic stimuli (Figure 1d). The relative strength of the couplings was
varied to introduce directional interactions. The index of direction is (ε1 - ε2)/(ε1 + ε2), and
equals -1 if Y drives X, 1 if X drives Y, and 0 for equal strengths. We used van der Pol limit cycles,
which respond to additive Gaussian random perturbations by fluctuations of the period, a key
variable for our experiments. To find the smallest noise that allows reliable detection of the
direction, we varied the noise strength Q (0 to 0.4) and the direction index (0 to 1). An
algorithm was applied to the time series to verify that the directionality was reliably identified
(Rosenblum & Pikovsky, 2001).

∂x1/∂t = x2
∂x2/∂t = (1 - x1²)x2 - x1 + ε1(x2 - y1) + sqrt(Q)ξ
∂y1/∂t = y2
∂y2/∂t = (1 - y1²)y2 - y1 + ε2(y1 - x2) + sqrt(Q)ξ                Equation (1)

where ε1 and ε2 are the coupling strengths and ξ is a Gaussian random perturbation.
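For illustration, Equation (1) can be integrated numerically with a simple Euler-Maruyama scheme. The sketch below keeps the coupling terms exactly as written above; the coupling strengths, noise level, time step and initial conditions are illustrative assumptions rather than the values used in the study:

import numpy as np

def simulate(eps1=0.3, eps2=0.1, Q=0.2, dt=0.001, T=100.0, seed=0):
    # Euler-Maruyama integration of the two coupled stochastic van der Pol
    # oscillators of Equation (1); returns the two position time series
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x1, x2, y1, y2 = 0.1, 0.0, -0.1, 0.0
    xs, ys = np.empty(n), np.empty(n)
    for i in range(n):
        dx2 = (1 - x1**2) * x2 - x1 + eps1 * (x2 - y1)
        dy2 = (1 - y1**2) * y2 - y1 + eps2 * (y1 - x2)
        x1 += x2 * dt
        y1 += y2 * dt
        x2 += dx2 * dt + np.sqrt(Q * dt) * rng.standard_normal()
        y2 += dy2 * dt + np.sqrt(Q * dt) * rng.standard_normal()
        xs[i], ys[i] = x1, y1
    return xs, ys

def direction_index(eps1, eps2):
    # (eps1 - eps2)/(eps1 + eps2): -1 if Y drives X, +1 if X drives Y, 0 for equal strength
    return (eps1 - eps2) / (eps1 + eps2)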




Figure 1. Results from the simulation, ramping the noise strength and the directionality. (a)
The mean relative phase between the oscillators, (b) index of synchronization, (c) index of
direction, and (d) exemplars of time series of the positions and corresponding distribution of
the relative phase.

Results
The simulations showed that the directionality shifted the mean phase difference between the
oscillators (Figure 1a). They further indicated that a finite noise strength can decrease the coherence of
the synchronization below a satisfactory value of the synchronization index (< 0.6; Figure 1b; see
Rosenblum & Pikovsky, 2001). The negative direction was reliably detected for a noise strength Q > 1.2
(Figure 1c).

Discussion
Our approach allowed us to define and select operational parameters to introduce preferred
directions of interaction. A safe choice is a compromise between the noise and the difference
between coupling strengths for the particular system chosen. The simulated components can be used
to animate elements of a multimodal scene. Preliminary experiments on the effect of
directionality on human behaviour are under way.

References
Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods.
Econometrica, 37, 424-438.
Lagarde, J., & Kelso, J. A. S. (2006). The binding of movement, sound and touch: Multimodal coordination dynamics.
Experimental Brain Research, 173, 673-688.
Rosenblum, M. G., & Pikovsky, A. (2001). Detecting direction of coupling in interacting oscillators. Physical Review E,
64, 045202.
Stoffregen, T. A., & Bardy, B. G. (2001). On specification of the senses. Behavioral and Brain Sciences, 24, 195-261.
Multisensory integration and segregation are based
on distinct large-scale brain dynamics

Viktor K. Jirsa
Theoretical Neuroscience Group UMR6152, CNRS, France
Center for Complex Systems & Brain Sciences, Physics Department,
Florida Atlantic University, USA
jirsa@ccs.fau.edu

Integration of the information received by the individual sensory systems provides us with a coherent


percept of the external environment (Stein and Meredith, 1993). Such multisensory integration in
the brain enhances our ability to detect, locate and discriminate external objects and events. Sensory
inputs that are temporally, spatially and/or contextually congruent are more effective in eliciting
reliable behavioral responses than incongruent inputs (Calvert, 2001). Despite a few important
studies on timing parameters influencing multisensory processing (Bushara et al., 2001), our
understanding of the underlying neural processes involved in multisensory integration for timing is
still limited.

To investigate the effects of relative timings of different sensory signals on multisensory


phenomena, we developed a rhythmic multisensory paradigm in the audio-visual domain and
performed first a behavioral study followed by a functional Magnetic Resonance Imaging (fMRI)
study (Dhamala et al., 2006). Our paradigm was motivated by the theoretical investigation of
dynamical states of interacting phase oscillators, which shows that two coupled periodic oscillators
driven under weak impact of an external stimulus display characteristic dynamic states of
synchrony, asynchrony and a non-phase locked state, also known as phase wrapping or drift (Kelso,
1995). Under variation of two control parameters, the oscillator system can be guided through the
parameter space displaying the various dynamic behaviors, including multistability and phase
transitions. In our experimental paradigm, the two control parameters correspond to the two timing
parameters stimulus onset asynchrony (SOA, Δt) and stimulation rate (f). The theoretical study
predicts that temporally congruent multisensory stimuli can be expected to cause a percept of
synchrony, and incongruent stimuli to cause a percept of asynchrony or a third possible perceptual state.
This third perceptual state will be referred to as drift or neutral percept because it represents a failure of
multisensory integration and is qualitatively different from the percept of asynchrony.
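The qualitative prediction can be illustrated with the simplest phase description of two weakly coupled periodic processes. The sketch below is a generic Adler-type phase equation and a crude classification rule; the coupling strength, the 15 degree threshold and the mapping onto percepts are illustrative assumptions, not the model of Kelso (1995) or Dhamala et al. (2006):

import numpy as np

def phase_dynamics(delta_omega, K=1.0, dt=0.01, T=100.0, phi0=0.3):
    # integrate dphi/dt = delta_omega - K*sin(phi): the phase locks when
    # |delta_omega| <= K and wraps ("drifts") otherwise
    phi = np.empty(int(T / dt))
    phi[0] = phi0
    for i in range(1, len(phi)):
        phi[i] = phi[i-1] + (delta_omega - K * np.sin(phi[i-1])) * dt
    return phi

def percept(delta_omega, K=1.0):
    # toy mapping onto the three perceptual states discussed above
    if abs(delta_omega) > K:
        return "drift"                             # phase wrapping, no fixed percept
    phi_star = np.arcsin(delta_omega / K)          # stable phase-locked solution
    return "synchrony" if abs(phi_star) < np.radians(15) else "asynchrony"

if __name__ == "__main__":
    for d in (0.0, 0.8, 1.5):
        print(d, "->", percept(d))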

In the behavioral study, sixteen subjects reported a given percept and quantified its
perceptual strength in the timing parameter space. Stimulation rates f were 0.5, 1, 1.5, 2, 3, and 3.5
Hz and SOAs Δt were -200, -150, -100, -50, 0, 50, 100, 150, 200 ms. The subjects’ behavioral
performance indicated that the perceptions of auditory-visual stimuli changed qualitatively
according to the values of the control parameters Δt and f. We found the existence of four distinct
perceptions: perception of (i) synchrony, perception of asynchrony via (ii) auditory leading visual
stimuli (AV) or (iii) visual leading auditory stimuli (VA), and (iv) changing order of stimuli (drift),
in which subjects could report no clear percept. Below stimulation rates of 2.5 Hz, the
perceptions of synchrony and asynchrony persist, whereas above 2.0 Hz there is a region of drift, or
no fixed percept. Thus, our major behavioral finding confirms the theoretical predictions and
identifies three distinct perceptual states: synchrony, asynchrony and drift.

In the neuroimaging study, we aimed at identifying the neural correlates of the three
perceptual states. We reduced the number of conditions for practical purposes and conducted the
fMRI experiment using only those conditions that had produced highly consistent effects (stable
percept or no fixed percept) among 13 subjects. The fMRI experiments consisted of three functional
runs of sensory (auditory, visual and auditory-visual) stimulation, and rest conditions in a random
order on-off block design. A 1.5 Tesla GE Signa scanner was used to acquire T1-weighted
structural images and functional EPI images for the measurement of the blood oxygenation level-
dependent (BOLD) effect. The data were preprocessed and analyzed using Statistical Parametric
Mapping (SPM2). The main effect of crossmodal processing was obtained by combining all the
bimodal stimulation conditions. The random effects analysis of combined bimodal conditions
versus rest conditions showed bilateral activations at p < 0.001 in the inferior frontal gyrus, superior
temporal gyrus, middle occipital gyrus and inferior parietal lobule. Interestingly, a negative contrast
between bimodal conditions relative to rest was revealed in the posterior midbrain in the region of
the superior colliculus. We also calculated pairwise cross-correlations between the average
time series of these areas for asynchrony, synchrony and drift. The cross-correlation reflects the levels of
interdependence between these areas and provides insight into the reorganization of the network as the
perceptual states change. Our fMRI results show that prefrontal, auditory, visual, parietal cortices
and midbrain regions re-group to form sub-networks responsive to these different perceptual states
(see Figure 1 below).

Figure 1. The specific sub-networks are shown which are activated in correlation with the
distinct percepts Asynchrony and Synchrony. In the absence of a distinct percept identified with
Drift, a rudimentary network remains.
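For the network analysis mentioned above, a zero-lag correlation between the average regional time series is the simplest stand-in for the pairwise cross-correlations; the sketch below is illustrative only, and the region names and the choice of a zero-lag measure are assumptions:

import numpy as np

def pairwise_crosscorrelation(region_timeseries):
    # region_timeseries: dict mapping a region name (e.g. "inferior parietal lobule")
    # to its average BOLD time series; one correlation matrix would be computed
    # per perceptual state (synchrony, asynchrony, drift)
    names = list(region_timeseries)
    data = np.vstack([region_timeseries[n] for n in names])
    return names, np.corrcoef(data)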

Our findings support the notion of a widely distributed network of brain areas in multisensory
integration and segregation for timing, and further highlight the specific roles of the inferior parietal
cortex and superior colliculus in multisensory processing (Bushara et al., 2001). In particular, our
study shows that networks of brain areas, rather than any individual site, are involved in crossmodal
processing although the components of these networks may be differentially responsive to
synthesizing different types of crossmodal information. Percepts of Synchrony and Asynchrony are
associated with different activations, whereas the percept of Drift activates only a basic rudimentary
sub-network (see Figure 1). This implies that the percept Asynchrony has to be interpreted as an
independent percept rather than just the loss of the percept Synchrony.

References
Bushara, K. O., Grafman, J., & Hallett M. (2001). Neural correlates of auditory-visual stimulus onset asynchrony
detection. Journal of Neuroscience, 21, 300-304.
Calvert, G. A. (2001). Crossmodal Processing in the human brain: insights from functional neuroimaging studies.
Cerebral Cortex, 11, 1110-1123.
Dhamala, M., Assisi, C. G., Jirsa, V. K., Steinberg, F. L., & Kelso, J. A. S. (2006). Multisensory Integration for Timing
engages different brain networks. Neuroimage, in press.
Kelso, J. A. S. (1995). Dynamic Patterns: Self-Organization of Brain and Behavior. MIT Press, Cambridge, MA.
Stein, B. E., & Meredith, M. A. (1993). The Merging of the Senses. MIT Press, Cambridge, MA.
Are usages computable?
Bruno Bachimont
INA, France
bbachimont@ina.fr

Most systems are designed by paying more or less close attention to users and their
behaviour. Sometimes there is a user model that attempts to capture user behaviour in the terms of
the system. There may also be attempts to model context: designers hope their system will remain
adapted to its environment because it follows a model telling it how the context may evolve and how
it should react accordingly.

Such approaches try to integrate users in the system, users becoming one component among
others. There is no place for a dynamic and open interaction between the user and the system, where
the final behaviour is unpredictable from the point of view of the system or from that of the user.
But the purpose of an interface is not to enclose the user within the frame of the system, but to let her
build her own world, through the interaction with the system.

As a consequence, the issue is to set up the conditions for the user to singularize herself,
building herself through interactions with systems. Singularization should be understood in a strong
sense, where the result is not computable from the initial conditions. There is something new,
radically new, that emerges from the system–user interaction. This may be called “enaction”.
Enaction is then the term coined to express the fact that user behaviour is not the simple
consequence of a computer-based system, but the construction of a new future from initial
conditions set up by systems and individuals.

Most interfaces are not enactive because they expect the user to react according to
precalculated rules or patterns. The user is absorbed in the system world while an enactive point of
view would expect systems to be integrated and adapted to the user world. In fact, there is neither a
user world nor a system world, since we are dealing with interfaces, that is, interfaces that provide the
conditions to build a new, common and shared world. But the true reality is not so simple: systems
are linked in complex networks where resources and information are disseminated and shared.
Users belong to social worlds with complex relationships. The latest evolutions of our industrial era
reveal that the basic trend of technical conceptions is to consider users as an element or a piece of
the whole technical system. Their freedom is considered as a free and acceptable variation among
several possibilities planned by the system. But singularization is not variation: while the user
builds her own world, she is not choosing a value among several predefined possibilities: she builds
something radically new. Therefore, interfaces should allow the users to escape from the choices
predefined by the technical design, in order to integrate the technical possibilities into their free
interpretation and construction.

Usages are not computable. The issue is then to allow non-computability to emerge from
computability, to let unpredictable enactive behaviours emerge from computable patterns and
formally designed systems.
Some epistemological considerations on relation between cognitive psychology
and computer interfaces (on the example of tactile-proprioception synergy)

Armen Khatchatourov1 & Annie Luciani2


1 COSTECH, Technological University of Compiègne, France
2 ICA Laboratory, INPG-Grenoble, France
armen.khatchatourov@utc.fr

Introduction
The sense of touch is of particular importance for virtual environments, as it is supposed to
offer tangibility to virtual objects. However, to do so, we have to deal with an epistemological
difficulty: on the one hand, it seems that we need advanced knowledge of human perception to
improve these devices, while, on the other hand, acquiring this knowledge already presupposes
an advanced development of interfaces used as measuring devices.
But what are we measuring when, in cognitive psychology, we use for example a
force-feedback/tactile device to explore a tactile dimension in haptic perception? As will be
explained below, the categorisation of the tactile dimension is problematic. To what extent can we
then postulate knowledge of the sense of touch “as it is” in humans? What is the epistemological
status of these devices?
Conversely, when implementing a new interface for gesture interaction, to what extent
can or should we reproduce what we know about the “natural” sense of touch?
Furthermore, to what extent is the emergence of the user’s experience determined by this
knowledge of the sense of touch? Can we go so far as to say that, in the end, we are studying the
emergence of new aspects of interaction rather than a “real touch as it is”?
To sum up: on which epistemological basis should we study the emergence/perception of objects
in VR? Are interfaces the goal of studies on human perception, or are they a means for such
studies?
In conclusion, we will present in which sense an enactive approach could overcome this
epistemological difficulty.

Tactile-forcefeedback interfaces
Let us consider in more detail the relation between tactile and force perception, as studied in
cognitive psychology by means of interfaces. We will present some points showing that this question still
needs a basis on which a categorisation of the relation between tactile and force perception
could be made.
(i) When investigating the synergy between tactile and force perception, the difficulty is
considerable. In certain situations tactile and force information seem to be redundant, in others
complementary. What is called “tactile” in real situations is not clearly defined and is tightly linked to
force perception, and there is clearly a lack of hypotheses concerning the relation between tactile
and force perception.
(ii) Various kinds of information could be considered tactile. This variety means that, most
often, only one or two aspects or “components” will be chosen for implementation in HCI.
(iii) Even within a chosen aspect, the tactile stimulation is relatively poor and not comparable
to the richness of tactile sensation in a real situation.
(iv) These devices are conceived on the basis of hypotheses which are, at the current stage,
“exploratory”.
(v) This implementation can proceed only by a metaphor, given the fact that the tactile
interface cannot reproduce the richness of the tactile sense. These metaphors can be either
functional or physiological (or a mix of both). Thus, what is called “tactile” in an interface
essentially depends on the model and metaphors implemented.
(vi) Last but not least, the tactile cannot be clearly separated out under experimental conditions, since it
is still stimulated through the grip of the FFD (in other words, the conditions of tactile stimulation
are changed, but not absent: contrary to the “natural” situation, the user’s hand is in contact with a
rigid grip (in the case of FFD only) or grasping a rigid grip with only local tactile stimulation). This
seems sufficient to some extent for the perception of texture, but may not be sufficient in other
situations (for example in prehension tasks, where touch serves to control the pressure applied by the
user or to detect a relative movement).
(vii) Obviously, knowledge of some aspects of perception can improve the interaction by
addressing critical limiting points (for example physiological limitations) in chosen and
controlled situations, and thus improve haptic interfaces under the precise conditions of certain critical
tasks. But this does not yet prejudge the emergence of the sense of the object for the user, as such a
sense is a result of the interaction and cannot be understood by isolating the human “as it is” on
the basis of an objectivist position. We have to be attentive to the fact that, in this case, the
knowledge of human perception is model-, metaphor-, device- and task-dependent.
The separation between the tactile sense and force perception is not located in the human, but rather in the
specific devices: in natural perception, tactile sensory feedback and force feedback are difficult to
separate and always go together. Thus, the categorisation can be made only on the basis of the
devices we have at our disposal today, which aim to explore the role of the tactile dimension by
separating it out.

The points above show that there is a gap between what one thinks the tactile “is” in human
perception and what one wants to, or can, implement. Thus, from the epistemological point of view, the
starting point should be attentiveness to all of these points: the first thing is to do the
spadework in this field and to find out on which criterion the categorisation of the relation between
tactile and force perception can be made.

Conclusion
From an epistemological point of view, several consequences could be discussed.
1. If the understanding of the sense of touch itself is conditioned by studies using specific
interfaces, then the interfaces play the role of “categorisation devices”. In a certain sense, the
situation is similar to the well-known metaphor of the “telephone exchange” which was used to “explain”
the functioning of the central nervous system, as well as to the metaphor of the computer in the
computational paradigm of cognitive processes. However, here the devices go beyond a simple
metaphor or an explicative model, as they serve as measuring devices.
2. Furthermore, if the sense of touch cannot be approached otherwise than through the
situation of interaction, it means that this sense is mainly constituted by the interaction, and requires
an epistemological shift abandoning the objectivism on which classical cognitive psychology
was originally based (i.e. postulating the knowledge of human perception as it is).
3. Enaction in a strong sense, as we understand it (i.e. the co-arising of the subject and the world,
grounded in phenomenology as the thematisation of lived experience and “completed” by a
recognition of the role of technics in the process of “enacting” a world of lived experience),
could provide such a shift.

In the enactivist framework, the relation to the environment as well as the knowledge about this
relation are constructed. If we consider the senses as the instantiation of structural coupling (which
is “a historical process leading to the spatio-temporal coincidence between the changes of state” of
the participants; Maturana, 1975), and if we consider the fact that they can be affected by the means of
such coupling (here the interfaces), then the notion of enaction can address the difficulty of conceiving
the senses as something constructed, both in the sense of their evolution and in the sense of our
knowledge of them. It thus resolves the difficulty which we described above as the ambiguity
between the use of interfaces in cognitive studies and the necessity of such studies for the design of
interfaces, and it can even guarantee the epistemological coherence of the complementarity between
them. But the price to pay is to endorse a sort of radical constructivism taking into account the
technical dimension of the interaction, and finally to renounce objectivism.

FREE TALKS
Role of the inertia tensor
in kinaesthesia of a multi-joint arm reaching movement

Delphine Bernardin1, Brice Isableu2, Benoit Bardy3 & Paul Fourcade2


1 McGill University, Canada
2 University of Paris XI, France
3 Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
delphine.bernardin@music.mcgill.ca

Introduction
Humans routinely perceive and control the direction of their arms without looking at them. An
important question is what kinaesthetic information is used to perform such activities successfully.
Past work has demonstrated that this ability is tied to the arm’s inertial eigenvectors, invariant
mechanical parameters corresponding to a limb’s axes of rotational symmetry (Riley et al., 2005).
Previously, these questions have been investigated in pointing tasks involving arm movements
limited to only one degree of freedom (Pagano & Turvey 1995; Pagano et al., 1996; Garrett et al.,
1998). Indeed, the eigenvector (e3) is a directional invariant in kinaesthesia of the limb position in
the egocentric space (Garrett et al., 1998) and the mono-joint arm reaching movement orientation
(Pagano et al., 1996). Nevertheless, the contribution of e3 eigenvector is still a matter of debate
(Craig & Bourdin, 2002; van de Langenberg et al., 2005). A recent study has shown that this
information contributes to the perception and control of the final direction of unconstrained multi-
joint arm reaching movements but that its exploitation varies from one person to another (Bernardin
et al., 2005). The study of the final performance indicated two behaviours: one corresponding to the
exploitation of e3 (alignment of the eigenvector with the target; TS) and the other to a compensation
strategy of the alteration of the arm mass distribution (alignment of the finger with the target; OS).
The present experiment examines whether the e3 eigenvector is exploited during a pointing
trajectory.

Method
Participants (N=13) held an apparatus that kept their wrist fixed. The mass
distribution of the right arm was modified via a cylinder placed on this apparatus, at 28 cm from the
centre of the hand, perpendicular to the forearm axis. Masses were located either symmetrically
(S) or asymmetrically – on the Right (R) or Left (L) side of the forearm – thus breaking its alignment with
the z rotational axis of the shoulder. The task was to point as accurately as possible toward a vertical
line. The target was located in such a way that the shoulder formed a 90 degree angle with the trunk
when the arm was fully extended. Participants were required to maintain the forearm/object system
in a horizontal plane during the reaching movement. No physical limitation was used to constrain the
arm’s movement in this plane, so that resistances to rotation were conserved. A Vicon 460 motion
analysis system was used to capture the movement at 60 Hz. The final performance, the trajectory
curvature, and the kinematics of the finger, elbow (β) and shoulder (α) angles were computed. The
method used to compute the eigenvectors and eigenvalues of the principal moments of inertia, Ixx, Iyy,
Izz, was developed in Bernardin et al. (2005). The deviation of the e3 eigenvector in relation to the
target-acromion axis was calculated at the maximal peak of hand velocity (θ) and at the end of the
pointing movement. In the latter case, the e3 eigenvector deviated from the longitudinal axis of the
extended arm, on average, by 1.5 (M1), 2.9 (M2), and 6.2 (M3) degrees in the 100 g, 200 g and 500 g
conditions, respectively.
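To illustrate what such a computation involves, the sketch below builds the inertia tensor of a crude rod-plus-point-mass forearm model and extracts the e3 eigenvector and its deviation from the longitudinal axis. It is a toy illustration, not the method of Bernardin et al. (2005); the segment mass, length and load positions are arbitrary assumptions, so the resulting angles will not reproduce the deviations reported above:

import numpy as np

def point_inertia(m, p):
    # inertia tensor of a point mass m located at position p, taken about the origin
    p = np.asarray(p, dtype=float)
    return m * (p @ p * np.eye(3) - np.outer(p, p))

def rod_inertia(m, L):
    # uniform rod of mass m and length L lying along the x axis, one end at the origin
    return np.diag([0.0, m * L**2 / 3.0, m * L**2 / 3.0])

def e3_deviation(m_arm=1.5, L=0.40, m_add=0.2, pos_add=(0.30, 0.03, 0.0)):
    # angle (deg) between the e3 eigenvector (smallest principal moment, the axis of
    # least resistance to rotation) and the longitudinal axis of the modelled forearm
    I = rod_inertia(m_arm, L) + point_inertia(m_add, pos_add)
    evals, evecs = np.linalg.eigh(I)        # eigenvalues returned in ascending order
    e3 = evecs[:, 0]
    cos_angle = abs(e3 @ np.array([1.0, 0.0, 0.0]))
    return np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))

if __name__ == "__main__":
    for m_add in (0.1, 0.2, 0.5):           # roughly the 100 g, 200 g and 500 g loads
        print(m_add, "kg ->", round(e3_deviation(m_add=m_add), 1), "deg")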

Results
The results showed significant differences for the β angle (F(6, 72) = 14.44, p < 0.05)
and for the θ angle (F(6, 72) = 2.59, p < 0.05) at the maximum peak of finger velocity. In addition, the
inter-joint coupling times between the maximum peaks of α and β angle velocity were significantly
different (F(6, 72) = 3.83, p < 0.05). The average interval was smaller for the left conditions of mass
distribution than for the right conditions. A cluster analysis performed on the maximal curvature of
the hand trajectory indicated the existence of three behaviors: a left-curve trajectory (N = 4; Mean =
25.07 mm; SD = 11.48), a straight-line trajectory (N = 4; Mean = -0.83 mm; SD = 11.54), and a
right-curve trajectory (N = 5; Mean = -45.04 mm; SD = 11.28).
A general regression indicated a significant relation between predicted and observed end
pointing performances (y = 0.25x, R² adjusted = 0.98, p<0.05). A Cluster analysis performed on the
individual slope values revealed significant differences between four different strategies (TS, OS
and two mixed strategies) for the lowest and intermediate mass conditions: 1.5° (F(3,9) = 34.63;
p<0.05) and 2.9° (F (3,9) = 7.91; p<0.05), but not for the larger mass condition (p>0.05).

Conclusion
The experiment indicated 1) the dependency of arm’s directional control on the e3 eigenvector
parameters, and 2) the presence of a large inter-individual variability suggesting the existence of
different strategies to control the end and the trajectory in a multi-joint arm reaching movement.
Thus, the motor system appears to be able to use muscular co-contraction to compensate for the
alteration of mass distribution (Gribble et al., 2003). This study validates the exploitation of the
inertia tensor in unconstrained multi-joint arm movements. Thus, the coordination of the segments'
masses during reaching can be simplified by using an invariant available online in the inertial flow
detected by our haptic system. The results also provide evidence for the existence of sensori-motor
alternatives in exploiting kinesthetic information.

References
Bernardin, D., Isableu, B., Fourcade, P., & Bardy, B. G. (2005). Differential exploitation of the inertia tensor in multi-
joint arm reaching. Experimental Brain Research, 167(4), 487-495.
Craig, C. M., & Bourdin, C. (2002). Revisited: the inertia tensor as a proprioceptive invariant in humans. Neuroscience
Letters, 317, 106-110.
Garrett, S. R., Pagano, C. C., Austin, G., & Turvey, M. T. (1998). Spatial and physical frames of reference in
positioning a limb. Perception & Psychophysics, 60, 1206-1215.
Gribble, P. L., Mullin, L. I., Cothros, N., & Mattar, A. (2003). Role of cocontraction in arm movement accuracy.
Journal of Neurophysiology, 89, 2396-2405.
Pagano, C. C., Garrett. S. R., & Turvey, M. T. (1996). Is limb proprioception a function of the limbs' inertial
eigenvectors? Ecological Psychology, 8, 43-69.
Pagano, C. C., & Turvey, M. T. (1995). The inertia tensor as a basis for the perception of limb orientation. Journal of
Experimental Psychology, 21, 1070-1087.
Riley, M. A., Shaw, T. H., & Pagano, C. C. (2005). Role of the inertial eigenvectors in proprioception near the limits of
arm adduction range of motion. Human Movement Science, 24, 171-183.
van de Langenberg, R., Kingma, I., & Beek, P. J. (2005). Mechanical Invariants in the Perception of limb Orientation:
A new Hypothesis. 13th International Conference on Perception and Action, pp. 62. 5-10. July, Monterey,
California, USA.
Considering the normalized vertical reach performance for consistently
controlling virtual mannequins from full-body input

Ronan Boulic, Damien Maupu & Daniel Thalmann


Ecole Polytechnique Fédérale de Lausanne, Switzerland
Ronan.Boulic@epfl.ch

The present study discusses the results of a reaching test where subjects with different statures
were instructed to reach successive targets located at ten distinct relative heights. Our preliminary
results are in accordance with prior related findings suggesting that the vertical reach ability is very
similar across subjects when normalizing the target height by the subject height. An important
resulting characteristics is the performance variation as a function of the normalized height that we
examine in the light of our intended field of application, namely Virtual Prototyping. Our objective
is to propose a consistent full-body postural control of a virtual manniquin that may have a different
stature from the user of the system. Therefore the existing motion capture approaches are evaluated
for that purpose. To conclude we propose the concept of the “Alter-Body Experience” that, we
postulate, establishes an intuitive mapping between the user and the manniquin reach abilities.

Motivation
Studies of human full-body reach ability are rare due to the complexity of the musculo-
skeletal system; we can mention Carello et al. (1989), comparing the reachability prediction made
from visual information alone to the actual reaching distance, and Robinovitch (1998), studying
the prediction of full-body reach limits. Our interest in the full-body postural control of a virtual
mannequin led us to search for studies establishing similarities across subjects when normalizing the
reach target height by the body height. Despite some positive indications from the literature on
egocentric distance (Loomis & Knapp, 2003), we could not find a clear answer to this question in
prior work. For these reasons we conducted the small experiment described hereafter.

Full-body vertical reach experiment


Seven male students aged 25 to 30 participated in the study. Their body heights were evenly
distributed within [1.68 m, 1.91 m]. None had any contra-indication to standing up for the
duration of the study. From now on, we denote the total height of a subject by the symbol H. Each
subject stood with his toe tips at a distance H/3 in front of a 2.4 m high backlit screen. A
white target (0.1 m diameter) was displayed at different normalized heights, from 0.2 H to 1.1 H, in
increments of 0.1 H. The presentation order was randomized among reach series and among
subjects. A reach series consisted of 20 successive randomized reach tasks (twice for each
normalized height), organized as follows. The subject held a joypad device with both
hands for three purposes: 1) ensuring a clear starting and intermediate posture with both hands in
contact with the abdomen, 2) ensuring the symmetry of the reach posture, and 3) allowing the subject to
signal the end of the task by pressing a button. Each reach task required bringing the joypad device
four times to the target and back to the starting posture while counting each contact with the screen.
This procedure was intended to reduce the performance variability of each subject by offering a
sufficiently long activity whose duration was regulated by the verbal counting. In addition, subjects were
instructed to perform the activity at a uniform and gentle rhythm. A six-second pause was made
between targets and a seated two-minute pause was made between reach-task series (three
series were performed after one series serving as a training phase). The subjects were free to flex their
legs or to reach the target on their toes.

Results
Figure 1a shows the average task durations normalized by the average task-series duration
(with one standard deviation interval) as a function of the normalized height for one subject. A clear
increase of about 25% can be observed at both ends of the normalized height interval. This
pattern is confirmed across all subjects, as shown in Figure 1b (only the normalized average
durations are drawn). One subject displays an even larger variation while still following
the pattern.

Figure 1. (a) single subject with one standard deviation, (b) all subjects

Exploitation for the consistent full-body control of a virtual mannequin


We can deduce one strong indication from the pattern of Figure 1 for the correct full-body control of
a virtual mannequin by a user of a different height. When both the user and the mannequin
are controlled through goals having the same normalized height, they consistently show similar
normalized performances. Conversely, normalized performances can differ by up to 25% (over the
studied interval) if the normalized target heights are different. This typically happens if the raw
sensor data measured on the user are exploited directly to guide the mannequin posture.
A first solution to alleviate this problem is to scale the sensor distance data by the ratio R =
mannequin height / user height. The scaled sensor data then act as goals having the same
normalized height as the user sensor data. However, the user's cognitive load might be too high
when the height difference is large or when the environment is complex, because the virtual
mannequin still exists as a distinct entity within the virtual environment.
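As a rough illustration of this first solution, here is a minimal sketch (our own example, not the
implementation used in this work; the function and variable names are hypothetical): the vertical
sensor measurement taken on the user is multiplied by the ratio R before being used as a goal for the
mannequin, so that both bodies work at the same normalized height.

def scale_reach_goal(sensor_height_m, user_height_m, mannequin_height_m):
    """Scale a raw height measured on the user into a goal with the same
    normalized height (h / H) for the mannequin."""
    R = mannequin_height_m / user_height_m   # R = mannequin height / user height
    return R * sensor_height_m

# A 1.70 m user touching a point at 1.87 m (normalized height 1.1 H) drives a
# 1.90 m mannequin toward 1.1 times its own height rather than toward 1.87 m.
goal = scale_reach_goal(1.87, user_height_m=1.70, mannequin_height_m=1.90)
print(goal, goal / 1.90)   # about 2.09 m, i.e. 1.1 H for the mannequin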
The “Alter-Body Experience” we want to explore in our future work is instead to scale the
virtual environment by 1/R, so that the user experiences the interaction as if he/she were the
mannequin itself. We conjecture that such an approach offers optimal ease of control for the user.
In this way the mapping between the user's body and the mannequin's body is transparent and
representative of a truly enactive interface.

References
Carello, C., Grosofsky, A., Reichel, F., Solomon, H. Y., & Turvey, M. T. (1989). Visually perceiving what is reachable.
Ecological Psychology, 7(1), 27-54.
Loomis, J. M., & Knapp, J. M. (2003). Visual perception of egocentric distance in real and virtual environments. In L. J.
Hettinger and M. W. Haas (Eds.), Virtual and Adaptive Environments (pp. 21-46). Mahwah NJ: Erlbaum.
Robinovitch, S. N. (1998). Perception of postural limits during reaching. Journal of Motor Behavior, 30(4), 352-358.
Computer technology and enactive assumption

Dominique Dionisi1& Jacques Labiche2


LITIS EA 4051 INSA
1
University of Rouen, 2University of Le Havre, France
Dominique.Dionisi@insa-rouen.fr

This communication proposes an operationalisation of the theory of enaction with the computer,
which can be summarised as follows: to characterize “software processes” implied
in “experiential processes implying cognitive processes”. Computers were designed apart from the real
world, primarily to validate theoretical assumptions, but they came to disrupt that world in a completely
unexpected way, and even to revolutionize it. The introduction of the computer into the world,
first as computing power, then as a machine to process data, to model and to simulate, and
finally as a machine for communication, deeply modified it. Alan Turing, in 1936, proved that there
are mathematical problems which cannot be solved by any formal mechanism, and illustrated this with “a
simple method which has all the properties of modern data processing, and which would later be named
the Turing machine” (Fitoussi, 2005). The computer was thus designed on the model of the
Turing machine to validate or invalidate the hypothesis of the decidability of mathematics.
In parallel, a small research group, also very active in the field of the sciences of the mind, was at
the origin of cybernetics and then of cognitive science. The starting point of this movement dates back to
the publication of an article by McCulloch and Pitts (1943). This article suggests that logic is at the
basis of the operation of the brain, which is described as composed of components, or neurons,
embodying logical principles. Thus, the fundamental assumption is not only a similarity of
behavior between the neuronal machine (the living human brain) and the computer but, moreover, an
identity of operating principles: logic based on a finite-state automaton. The
tendency to think that consciousness proceeds exclusively by the computation of symbols led the
first researchers in artificial intelligence to identify human cognition with the computer according to the
following equation: cognition -> computation of symbols; computer -> computation of symbols;
therefore, cognition = computer. This vision is overturned by the theory of enaction (Varela, 1996b, pp.
34-36), which postulates that cognition is the prerogative of the living and cannot be identified
with the computer. The identity equation showed which limits should not be crossed when
bringing thought and computer together. Our proposal is based:
- on the one hand, on the ontological nature of enactive processes;
- on the other hand, on the difference between formal calculation on symbols, as practised by the
computer, a machine operating in discrete time steps (Sigault, 2002), and enactive cognition, which
is by nature continuous, active and autopoietic, and which is clearly what thought is.
To claim that it is possible to reconsider the computer and its operation by analogy with, and
transposition of, the new models of the living and of cognition seems to repeat the same errors as those
of the first identification of human and machine. In recent years, much effort has been devoted to the
development of communication interfaces between human and machine, and to attempts at an ever
closer integration of the user into software operation. We propose a particular posture within this
framework.
Our aim is to clearly link our design approach to this double characteristic: since the
experiment takes place to some extent “in silico”, we must determine the conditions of its
unfolding, while taking care not to force the process itself, which is constituted in the course of
execution. A hermeneutic approach is thus essential, relating in turn and at the same time
to the epistemological and ontological aspects, corresponding respectively to the setting up of the
experimental conditions and to the experiment itself in its active progress during execution. The
realization of this experiment is then carried out by self-constitution: the experiment actively self-
organizes during its life cycle according to the disturbances generated by the couplings in which
it is involved.
Current technologies, in particular object-oriented technologies, make it possible to address this
double constraint (hermeneutics): to organize (epistemology) a system which allows the execution
of a self-constituted process (ontology), with no dedicated task (non-teleological), ready to be coupled
with a highly complex environment, in a non-informative mode of interaction (no flow).
The aim is not to copy the living and the enactive in the machine, but to create the conditions for the
implementation of processes compatible with the involvement stated above. The dominant features of
such artificial systems, intended to take part in the cognitive experiment, concern their capacity for
coupling due to their properties of self-organization. A second interesting characteristic is the ability
to represent, in a form available to the machine and its programs (internalisation), the properties of
indissociability linked to enaction, unifying perception/action and internality/externality in
autonomous entities or ontological processes. The essential difference between the human and the
machine involved in cognitive processing lies in the fact that the machine proceeds by the computation
of discrete symbols (Pignon, 1992, p. 103), whereas the human experiences enactively, in continuous
historical processes.
This question no longer falls within the scope of “problem solving”, but within that of
“problem definition”, as Varela proposes (Varela, 1996b, p. 120). And this
access, within the framework of the characterization in progress, is achieved by coupling the one
and the other on this particular environment: “… intelligence is no longer defined as the faculty of
solving a problem, but as that of entering a shared world” (Varela, 1996b, pp. 112-113). We
propose to set up the structural coupling which makes it possible to integrate the software technique
into the cognitive practice. The coupling is established as much between processes as with the field
of knowledge, the latter constituting the environment common to both processes. It is this field of
knowledge which constitutes, for us, the world of significance of which Francisco Varela speaks.
The essential questions concern:
- the focus on a dialogue between the user and the machine via what is called the man/machine
interface, reconsidered in terms of coupling between processes;
- the functional vision of systems, which implies the existence of inputs and outputs and is composed of
functions transforming inputs into outputs, reconsidered in terms of disturbance and
coherence.
Within the framework of enaction, these questions become tractable through the
implementation of structural couplings between systems, and between system and environment, and
through problem definition, carried out ontologically by the system itself within the framework of its
coupling with its environment, made up of the knowledge field in which it operates.
The human/machine interface is extended here to the forms shared between the cognitive
process and the software process. Shared does not mean transmitted reciprocally from one to the other,
but “known by the one and the other”. Word processing is certainly one of the finest
successes of a software product, enabling humans to go beyond what they could do without this
technical tool, which transforms their practice. Many fields can be modified by the provision of tools of
this type. Information search on the Web, for example, would be improved by the contribution
of the principles we have just stated. The integration of an interactive engine, based on
the reinforcement of the coupling and on the setting up of a continuous, self-constituted process of
co-construction of concepts, without a priori, starting from the resources available on the servers,
would greatly enrich searches.

References
Fitoussi, J-P. (2005). Recherche, donner du temps au temps. Le Monde, 05 octobre 2005.
Havelange, V., Lenay, C., & Stewart, J. (2003). Les représentations : mémoire externe et objets techniques. Intellectica,
35, 115-131.
Lemoigne, J. L. (2006). Intelligence de la complexité : déployons l’éventail de la complexité générale. Inter lettre
Chemin faisant MCX-APC, 33.
Pignon, D. (1992). Les machines molles de von Neumann. Postface de L’ordinateur et le cerveau, texte de J. von
Neumann traduit de l’américain par Pascal Engel. Paris: Editions La Découverte.
Sigault, O. (2002). Automatisme et subjectivité : l’anticipation au cœur de l’expérience. Thèse de doctorat spécialité
philosophie non publiée, Université Paris 1, Paris.
Varela, F. J. (1996a). Quel savoir pour l'éthique ? Action, sagesse et cognition. Paris : La découverte.
Varela, F. J. (1996b). Invitation aux sciences cognitives. Editions du Seuil, pp.30.
Vico, G. (1744). Principes d’une science nouvelle, texte traduit par J.L. Lemoigne (1986). Ed Nagel, pp. 136.
Tactile-force-feedback integration as an exemplary case for the sense of touch
in VE. A new T-FFD device to explore spatial irregularities.

Jean-Loup Florens1, Armen Khatchatourov2, Charles Lenay2,
Annie Luciani3 & Gunnar Declerck2
1 ACROE, France
2 COSTECH, Technological University of Compiègne, France
3 ICA, INPG, France
jean-loup.florens@imag.fr

Context
The tactile-force-feedback (T-FFD) integration is pertinent for (at least) two situations in
which the role of the tactile sense should be explored:
- Grip control in prehension tasks
- Exploration of spatial irregularities
In each of these tasks, the complexity of the interaction raises two major questions which should be
explored:
- The question of the role of the deformability of the exploratory body, which creates dynamic constraints
on the user’s movement (deformation of the exploratory body in the implemented model, and
information on this deformation brought to the user through tactile stimulation).
- The question of whether the addition of tactile information can overcome the “one-point” characteristic
of FFD: whether spatially distributed information on the environment, combined with one-point force
interaction, can approximate the spatial distribution of forces found in the real situation.
Thus, we have identified and have begun to explore 4 major cases in which the T-FFD
integration is important. In the following table, 2 rows and 2 columns define 4 cells and 4 major
cases to be explored. The experiments/devices numbered from 1 to 4 are positioned with regard to
these 4 cases of T-FFD integration.
1    The experiment already conducted at COSTECH: “The role of tactile augmentation of a
     PHANToM FFD studied on a task of goal-directed displacement”.
2    The TactErgos device: physically based modelling of interaction; deformable exploratory
     body; synchronous functioning. Pilot experiment: the role of tactile augmentation of a FFD
     interaction studied on a task of goal-directed displacement.
3, 4 Further work.

                                          Properties of the model and the interface
 Interaction with environment             Deformable body          Spatial information
 Exploration of spatial irregularities           2                          1
 Grip control                                    3                          4

Tactile-force-feedback device: TactErgos


We have developed, and will present, a T-FFD device with the following features:
- it is based on the ERGOS force-feedback device, driven by the TELLURIS synchronous real-time
processor (both developed by ACROE-ICA);
- the scene is modelled using physical modelling (the CORDIS formalism), i.e. by masses connected to
each other by springs. This makes it easy to model deformable objects and a deformable exploratory
body;
- tactile stimulation is realised by Braille cells. Each of the 16 pins is independently activated when the
force circulating between its corresponding mass of the exploratory body and the object is above a
certain threshold;
- the activation of the pins is integrated in the force-feedback computation loop, allowing synchronous
functioning at 3 kHz (the stimulation is refreshed at the computation rate of the physical model). At
each computation step, the 16-pin data are serialised down to 4 bits and sent to the electronics of the
Braille cells through the parallel-port interface of the VME bus, the piezoelectric stimulation being
limited to 500 Hz;
- the resultant of the interaction forces between the masses of the body and the object constitutes a
material constraint on the displacement of the body, and is transmitted to the user through the grip of
the FFD device.
The main novelty of the device is that the tactile stimulation is obtained strictly from the same
interaction loop, and obeys the same physical formalism, as the force feedback. Thus, it both provides
information on the spatial distribution of the forces circulating between the object and the body
(activation of the tactile pins) and permits the implementation of the deformable body.
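To make the pin-activation principle concrete, here is a minimal sketch of one computation step
(our own illustration, not the ACROE-ICA code; forces are treated as scalars, and the threshold,
names and values are hypothetical): the 16 per-mass contact forces are thresholded into pin states and
packed into 4-bit nibbles for the cell electronics, while their resultant feeds the force-feedback channel.

def tffd_step(contact_forces, threshold):
    """One step of a hypothetical T-FFD loop; contact_forces holds the 16 forces
    circulating between the masses of the exploratory body and the object."""
    # Tactile channel: one Braille pin per mass, raised when its force exceeds the threshold.
    pins = [f > threshold for f in contact_forces]
    # Pack the 16 booleans into four 4-bit nibbles, as when serialising to the Braille cells.
    nibbles = [sum(1 << i for i, up in enumerate(pins[k:k + 4]) if up) for k in range(0, 16, 4)]
    # Force-feedback channel: the resultant of the contact forces goes to the FFD grip.
    resultant = sum(contact_forces)
    return nibbles, resultant

# Example: only masses 3 and 7 press hard enough to raise their pins.
forces = [0.0] * 16
forces[3], forces[7] = 0.8, 1.2
print(tffd_step(forces, threshold=0.5))   # ([8, 8, 0, 0], 2.0)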

A pilot experiment
A pilot experiment is currently being conducted and will be presented. It involves a task of goal-
oriented displacement without vision: following the contours of a virtual rigid bridge to reach its
end.
A similar experiment has already been performed at COSTECH, using a TactoPHANToM device
(a tactually augmented asynchronous PHANToM device, with no deformation of the exploratory body).
In this COSTECH experiment, the display of tactile information improves the contour following of the
resistant structure (there is a substantial decrease in the number and duration of contact losses, which
means that adherence to the bridge is better): subjects are able to use the tactile information to
efficiently guide their displacements along the resistant surface.
This will allow a comparison between the two devices, and will elicit the role of the deformation of the
exploratory body.

Future work
Future work will be directed towards the exploration of grip control, where both the
deformation of the exploratory body and the spatial information seem to be important.

References
Declerck, G., & Lenay, C. (2006). The role of tactile augmentation of a PHANToM FFD studied on a task of goal-
oriented displacement. An enactivist contribution to the study of modes of control of haptic interactive
displacement. 2nd Enactive Workshop – Montréal.
Jansson, G., Billberger, K., Petrie, H., Colwell, C., Kombrot, D., Fänger, J., König, H., Hardwick, A., & Furner, S.
(1999). Haptic virtual environment for blind people: exploratory experiments with two devices. International
journal of virtual reality, 4, 10-20.
Lederman, S. J., & Klatzky, R. L. (2004). Haptic identification of common objects: Effects of constraining the manual
exploration process. Perception & psychophysics, 66(4), 618-628.
Uhl, C., Florens, J. L., Luciani, A., & Cadoz, C. (1995). Hardware Architecture of a Real Time Simulator for the
Cordis-Anima System : Physical Models, Images, Gestures and Sounds. Proceedings of Computer Graphics
International '95 - Leeds (UK). Academic Press. - RA Ernshaw & JA Vince Ed. - pp 421-436.
Adaptive acquisition of enactive knowledge
Wai-Tat Fu
University of Illinois , USA
wfu_temp@yahoo.com/wfu@uiuc.edu

Most Human-Technology Interactions (HTI) involve a constant flow of information between
cognition and the environment. In this presentation, I will describe a rational-ecological approach to
studying when and how perceptual-motor and memory strategies are dynamically deployed as the user
adapts to an interface. The major assumption of this approach is that memory and perceptual-motor
actions are executed adaptively in response to the cost and information structures of the
environment. In addition, cognitive resources are often adaptively allocated to exploit the statistical
structures of the environment. This general approach will be useful for understanding enactive
behavior and will provide general guidelines for designers of enactive interfaces.

Introduction
Traditionally, in the fields of cognitive psychology and artificial intelligence, interactive
behavior is modeled as problem-solving activity. The dominant information-processing approach
assumes a set of cognitive mechanisms that takes information obtained in the environment as input
and produces the selected actions as output (e.g., Newell & Simon, 1972). One weakness
of the traditional approach is that the input (information from the environment) is often taken as a
given truth, and the cognitive mechanisms simply passively process the input information. In
reality, however, the problem solver will often actively look for information in the environment,
thus influencing the input to the cognitive mechanisms (see Figure 1). For example, when a person
or a robot is navigating a maze, the person or robot may want to check what is behind an obstacle
or remember the last turn before choosing to go in a particular direction. These perceptual-motor and
memory strategies are often actively executed to seek information in the environment that is useful
for selecting the right actions.

[Figure 1 shows two panel diagrams, “Disembodied Cognitive System” and “Embodied Cognitive
System: Adaptive Information Seeking”, each depicting information input (Stimuli), information
processing (Cognition), and information output (Behavior).]

Figure 1. The different assumptions of how information flows in the traditional disembodied
and the proposed embodied cognitive systems.

The rational-ecological approach


One successful approach to characterizing information-processing mechanisms is the rational
approach (e.g., Anderson, 1990). Its major assumption is that cognition is well adapted to the long-
term statistical structures of the environment and is thus optimal for certain cognitive functions
that are critical to the organism. Traditionally, the rational approach has been predominantly concerned
with the adaptive nature of the cognitive algorithms that process the passive input taken from the
environment. It is unclear whether the rational approach can be extended to explain enactive
behavior involving active information-seeking actions and memory strategies. I propose a rational-
ecological approach that takes into account the processes involved when a person interacts with an
environment. Through the interactions, information samples are actively collected from the
environment. I will show that the processes underlying these information-sampling activities are
indeed also adaptive, in the sense that they serve the purpose of adapting to the structures of the
environment in order to optimize performance.
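The cost-based tradeoff at the heart of this approach can be sketched as a simple comparison of
expected time costs (a minimal illustration under our own assumptions; the cost values and the function
are hypothetical rather than a fitted model): a strategy that re-accesses the display is preferred when
access is cheap, while encoding more items in memory is preferred when access is expensive.

def preferred_strategy(access_cost_s, encode_cost_s, recall_failure_rate, items):
    """Compare the expected time of a perceptual-motor strategy (re-access the
    display for every item) with a memory strategy (encode once, re-access only
    after recall failures). All parameter values are illustrative."""
    perceptual_motor = items * access_cost_s
    memory = items * encode_cost_s + items * recall_failure_rate * access_cost_s
    return "memory" if memory < perceptual_motor else "perceptual-motor"

# Cheap access (a quick glance away) favors re-accessing the display; expensive
# access (e.g. a lockout delay before the display opens) favors encoding in memory.
print(preferred_strategy(access_cost_s=0.3, encode_cost_s=0.5, recall_failure_rate=0.2, items=4))
print(preferred_strategy(access_cost_s=2.0, encode_cost_s=0.5, recall_failure_rate=0.2, items=4))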

The experiments
In this presentation, I will describe the results of two sets of experiments that demonstrate the
adaptiveness of the processes involved in the active acquisition of enactive knowledge. In the first
experiment, subjects copied a configuration of colored blocks presented on a computer screen into
another window (Ballard, Hayhoe, Pook, & Rao, 1997; Fu & Gray, 2000; Gray, Sims, Fu, &
Schoelles, 2006). We manipulated the time costs associated with accessing the presented colored
blocks. We found that when the cost was high, subjects tended to encode more information in
memory to reduce the number of accesses to the colored blocks. When the cost was low, subjects
tended to encode less information in memory at the expense of an increased number of accesses (more
perceptual-motor actions) to the colored blocks. The results show that people tend to adaptively
balance the use of memory and perceptual-motor strategies in different environments, showing the
validity of the rational-ecological approach in explaining behavior. In the second experiment, I will
describe a map-navigation task in which subjects had to find the fastest path from a start point
to an end point (Fu & Gray, 2006). Subjects could actively sample information from different routes
to check their speeds. We manipulated the utility of the information obtained (how useful the speed
information was) and the costs of interaction (the time delays before the information was obtained)
to test whether subjects would adaptively adjust their information-sampling strategies, and how these
strategies would influence their final performance (the speeds of the routes they found). We found
that although people were adaptive in their information-sampling strategies, the information
samples obtained were often biased, and would lead them to stabilize at a suboptimal level of
performance.

Final remarks
Traditionally, perceptual-motor and memory processes have been studied separately. I propose
an integrated view of perceptual-motor and memory processes. Results from our experiments show
that the interactions of these processes are highly adaptive to the long-term statistical properties of
the environment. I argue that this rational-ecological approach will be useful as a conceptual
framework for understanding embodied cognition and will provide useful guidelines for the design
of enactive interfaces.

References
Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum.
Ballard, D., Hayhoe, M., Pook, P., & Rao, R. (1997). Deictic codes for the embodiment of cognition. Behavioral and
Brain Sciences, 20, 723-767.
Fu, W.-T., & Gray, W. D. (2000). Memory versus Perceptual-Motor Tradeoffs in a Blocks World Task. Proceedings of
the Twenty-second Annual Conference of the Cognitive Science Society (pp. 154-159). Hillsdale, NJ: Erlbaum.
Fu, W.-T., & Gray, W. D. (2006). Suboptimal tradeoffs in information seeking. Cognitive Psychology, 52, 195-242.
Gray, W. D., Sims, C. R., Fu, W.-T., & Schoelles, M. J. (2006). The soft constraints hypothesis: A rational analysis
approach to resource allocation for interactive behavior. Psychological Review, 113(3), 461-482.
Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Linking Perception and Action: A Task of Attention?
Nivedita Gangopadhyay
Institut Jean Nicod - EHESS, France
Nivedita.Gangopadhyay@ehess.fr

Of late, the major concern of an increasing number of theories of perception has been to link
perception and action, in an effort to understand the former as it proceeds in its actual context: the
everyday lived experience of an embodied perceiver situated in a dynamic environment. The insight
that perception involves “doing” something, in terms of physical movement in space, has found
expression in what I shall refer to as the “action-oriented theories” of perception (Gibson, 1966,
1979, Ballard, 1991, Varela et al., 1993, Milner & Goodale, 1995, Berthoz, 1997, O’Regan & Noë,
2001, Stoffregen & Bardy, 2001, Findlay & Gilchrist, 2003, Jacob & Jeannerod, 2003, Noë, 2004).
However, an analysis of the action-oriented theories reveals that they are not unanimous in their
understanding of the notion of action itself. To strengthen this newly emerging research paradigm,
there is a need to specify an underlying theory of action. In this paper I propose to address this
issue of a satisfactory notion of action. The questions I specifically seek to address are: In what
sense is the notion of action to be understood in order to firmly establish its links with perception?
Is action, considered simply as motor movement executed in response to sensory stimuli, sufficient to
adequately account for perception?
I shall argue that perception as it proceeds in an embodied perceiver situated in a dynamic
environment is crucially a purposive and goal-directed phenomenon. From this I shall further argue
that if perception is essentially accepted as a goal-directed phenomenon, theories that seek to
ascribe a crucial role to action in explaining perception cannot subscribe to a purely “stimulus
triggered” account of action, i.e. cannot consider action as merely motor movement in response to
sensory stimuli. There is a need to consider what have been called “Ideomotor views” of action.
Ideomotor views of action in general suggest that the presence of goals or the representation of
goals is a primary causal determinant of action. However, the necessary treatment of actions in the
context of perception from an ideomotor perspective has an important consequence for action-
oriented theories of perception. The goal-directed nature of actions in the ideomotor framework
makes it necessary for these theories to acknowledge the existence of fundamental cognitive
mechanisms as determined by which motor movements can play a crucial role in perception. Thus
if an ideomotor view of action is necessary in order to satisfactorily explain perception in terms of
action, it implies that it is not sensory stimuli that are the primary determinant of motor movements.
It is rather cognitive mechanisms that play a crucial role in the generation of the motor movements
and the notion of action is to be understood in terms of these cognitive mechanisms.
I shall claim that a fundamental cognitive mechanism in this respect is the mechanism of
attention. Some authors have pointed out the importance of this mechanism in the context of
learning of motor skills that depend on perceptual guidance (Wulf & Prinz, 2001, Wulf et al., 1998).
But the importance of this mechanism in establishing a crucial connection between perception and
action has not been fully explicated. Hence situating this mechanism in a vital explanatory role in
the context of action-oriented theories of perception, I shall claim that attention is the first cognitive
mechanism which reveals the critical dominance of cognitive mechanisms over the motor
movements that play a crucial role in the generation of perception. In other words, it is the
cognitive mechanism of attention that primarily links action and perception. I further contend that
considering the notion of attention as “cognitive unison”, as has been recently proposed (Mole,
forthcoming), could prove to be of explanatory advantage for the action-oriented theories of
perception.

Testing the hypothesis: Proposal for an experimental protocol


To test the hypothesis that it is the cognitive mechanism of attention that primarily links
perception and action, I propose the following experimental protocol centering on sensorimotor
learning, i.e. the learning of skills.
The protocol is based on the use of sensorimotor cues to modulate attention while learning to
control motor movements related to perception.
The experimental task is to teach subjects to play pool in a virtual set-up, involving a visual
display and a haptic device that acts as the cue in the set-up. The subjects selected must have no
prior experience of playing pool. They will be divided into two groups. The first group will be
taught in a context that instantiates the sensorimotor laws of a natural context including the auditory
effects of the motor movements executed by the subjects, such as the sound of the cue hitting the
balls, and the balls hitting each other or the sides of the table after being hit by the cue. The second
group will be taught in a context that instantiates the sensorimotor laws of a natural context but
without the auditory effects of the motor movements executed by the subjects. The following need to
be tested:
1. Are there differences in performance in the two groups in subsequent applications?
(Playing, interpreting a scene of the game, etc.)
2. Does the performance of the first group improve more quickly after trials?
3. Does this group perform better even in situations without auditory feed-back?
4. Is there a difference between the groups in reporting features of the virtual scene? Is there
evidence of better reporting by the first group, indicative of greater “immersion” in the
context?
I propose that the presence of the auditory cue will extract greater attention from the subjects,
and will direct that attention more forcefully and for a greater length of time to the visible effects of
the motor movements executed, and this will improve performance. Better performance in this
context is indicative of a better established link between perception and action. The greater the
attention involved, the stronger will be the link between perception and action. The first group of
subjects are likely to be more attentive to the execution of the motor movements, the sensory
consequences of the movements, and the observed consequences in the world. This can be
interpreted as lending support to the hypothesis that attention is the first cognitive mechanism that
links together perception and action.

References
Ballard, D. H. (1991). Animate Vision. Artificial Intelligence, 48, 57-86.
Berthoz, A. (1997). Le Sens du Mouvement. Editions Odile Jacob. Paris.
Findlay, J. M., & Gilchrist, I. D. (2003). Active vision: The psychology of looking and seeing. Oxford University Press,
Oxford.
Gibson, J. J. (1966). The Senses considered as Perceptual Systems. Boston: Houghton Mifflin.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Hommel, B., Musseler, J., Aschersleben, G., & Prinz, W. (2001). The Theory of Event Coding: A Framework for
Perception and Action Planning. Behavioral and Brain Sciences, 24(5), 849-878.
Jacob, P., & Jeannerod, M. (2003). Ways of Seeing, the Scope and Limits of Visual Cognition. Oxford University Press,
Oxford.
Lenay, C. (2003). Prothèses perceptives et constitution spatiale du sens. Noir sur Blanc, 7, 7-12.
Milner, A. D., & Goodale, M. A. (1995). The Visual Brain in Action. Oxford University Press, Oxford.
Mole, C. (forthcoming). Attention is Cognitive Unison. Mind.
Noë, A. (2004). Action in Perception. The MIT Press. Cambridge, MA.
O’Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain
Sciences 24(5).
Prinz, W., & Hommel, B. (2002). Common Mechanisms in Perception and Action. Oxford University Press, Oxford.
Stoffregen, T., & Bardy, B. G. (2001). On specification and the senses. Behavioral and Brain Sciences, 24, 195-261.
Varela, F. J., Thompson, E., & Rosch, E. (1993). The Embodied Mind. The MIT Press, Cambridge, MA.
Wulf, G., Höß, M., & Prinz, W. (1998). Instructions for motor learning: Differential effects of internal versus external
focus of attention. Journal of Motor Behavior, 30, 169-179.
Wulf, G., & Prinz, W. (2001). Directing attention to movement effects enhances learning: A review. Psychonomic
Bulletin & Review, 8(4), 648-660.
Dyadic postural activity as a function of support surface rigidity

Marc Russell Giveans1, Kevin Shockley2 & Thomas Stoffregen1
1 Human Factors Research Laboratory, University of Minnesota, USA
2 University of Cincinnati, Cincinnati, USA
givea017@umn.edu

Introduction
Shockley et al. (2003) showed, using nonlinear measures from cross recurrence quantification
(CRQ) analysis, that when two people converse with each other, there is an increase in shared
activity in standing posture, relative to when each person converses with another party. In
individuals, the functional integration of posture and supra-postural activity is known to be robust
across variations in the rigidity of the support surface (Smart et al., 2004). In this study, we asked
whether the sharing of postural activity within dyads also would be robust across variations in
support surface rigidity.

Method
Twelve pairs of undergraduate students from the University of Minnesota participated for
extra credit. All had normal or corrected-to-normal vision. Sixteen pairs of cartoon pictures were
presented at eye-level (3 feet away) to each pair. Each person was able to see only their picture. A
magnetic motion capture sensor attached at the participant’s waist was used to measure postural
sway (sampling rate 60 Hz). There were 16 trials in total, each lasting 2 minutes. Each pair was given
a 1-minute practice session with their partner. They were instructed to find as many differences
between the two pictures as they could within a 2-minute time frame. This was to be done by
conversing with their partner, describing each one’s picture to the other. Each person was asked to
keep their feet flat on the surface they were standing on, and not to pick up their feet or take a step.
Each pair of participants was positioned back-to-back, approx. 3 feet from each other with the
magnetic emitter between them. We used a 2 x 2 design (floor vs mattress, and intra- vs interpair
interaction). On half of the trials the pair stood on the laboratory floor, while on the other half each
participant stood on a small mattress. Half of the trials were done with members of each pair talking
to each other, while the other half was done with each person talking to an experimenter (E)
standing off to their side, out of direct view. The four experimental conditions were randomized in
4-trial blocks.

Results
Following Shockley et al. (2003), the dependent measures were percent recurrence (the
amount of shared postural activity) and maxline (longest parallel trajectory).
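For readers unfamiliar with these measures, the following minimal sketch (our own illustration, not
the analysis code used here; the radius value and the use of raw one-dimensional samples rather than
an embedded phase space are simplifying assumptions) computes percent recurrence and maxline
from two sway series.

import numpy as np

def crq_measures(x, y, radius):
    """Percent recurrence and maxline from a cross-recurrence plot of two series."""
    # A point (i, j) is recurrent when sample i of x and sample j of y are closer than `radius`.
    recurrent = np.abs(x[:, None] - y[None, :]) < radius
    percent_recurrence = 100.0 * recurrent.mean()
    # Maxline: the longest diagonal run of recurrent points, i.e. the longest
    # stretch over which the two trajectories evolve in parallel.
    n, m = recurrent.shape
    maxline = 0
    for offset in range(-(n - 1), m):
        run = best = 0
        for point in np.diagonal(recurrent, offset=offset):
            run = run + 1 if point else 0
            best = max(best, run)
        maxline = max(maxline, best)
    return percent_recurrence, maxline

# Two weakly coupled, sway-like signals (arbitrary units, 600 samples).
t = np.linspace(0, 60, 600)
sway_a = np.sin(0.5 * t) + 0.1 * np.random.randn(t.size)
sway_b = np.sin(0.5 * t + 0.4) + 0.1 * np.random.randn(t.size)
print(crq_measures(sway_a, sway_b, radius=0.2))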

Percent Recurrence. There was a main effect of Support Surface, F(1,11) = 18.84, p < .005.
There was greater shared activity when participants were standing on the floor as compared to when
they were standing on the mattresses. There was no main effect of Task Partner, F(1,11)= 1.72, p >
.05. The interaction between Support Surface and Task Partner did not reach significance, F(1,11)=
4.43, p > .05.

Maxline. There was a significant main effect of Support Surface, F(1,11)= 13.36, p < .005. The
coordinated postural motion was more stable when participants were standing on the floor as
compared to when they were standing on the mattresses. There was no significant effect of Task
Partner, F(1,11) = 1.49, p > .05 and no significant interaction, F(1,11) = 2.43, p > .05.

Figure 1. Interactions between support surfaces and task partner for %Recurrence and
MAXLINE.

Discussion
The results showed that shared postural activity during dyadic conversation was reduced
during stance on a non-rigid surface. Thus, the effects observed by Smart et al. (2004) in the context
of individual stance (in which the functional integration of body sway with the performance of
concurrent supra-postural visual tasks was robust across support surfaces) were not replicated in the
context of dyadic conversation. It is possible that dyadic coupling differs in some basic way from the
integration of postural and supra-postural activities in individuals. An alternative interpretation is
that relations between postural and supra-postural activity may differ for supra-postural activities that
are controlled relative to the illuminated and reverberative environments. Distinguishing between
these interpretations will require further research.

References
Shockley, K., Santana, M., & Fowler, C. (2003). Mutual interpersonal postural constraints are involved in cooperative
conversation. Journal of Experimental Psychology, 29(2), 326-332.
Smart, L. J., Mobley, B. S., Otten, E. W., Smith, D. L., & Amin, M. R. (2004). Not just standing there: the use of
postural coordination to aid visual tasks. Human Movement Science, 22, 769-780.
Stoffregen, T. A., Smart, L. J., Bardy, B. G., & Pagulayan, R. J. (1999). Postural stabilization of looking. Journal of
Experimental Psychology: Human Perception and Performance, 25, 1641-1658.
Anthropology of perception: sand drawings, body paintings, hand signs and
ritual enaction among Indigenous Australians

Barbara Glowczewski
Laboratoire d'Anthropologie Sociale, Collège de France, France
b.glowczewski@college-de-france.fr

Next to words and images, some interactions in ritual performances and various Aboriginal
media of expression (dancing, singing, body painting or sand drawing) relate not only to symbols
and icons, but also to a direct, intuitive and motor perception.
Varela's work is challenging for anthropologists because it allows us to try to theorise what, in
different cultures, could relate to such enactive knowledge. Among the Warlpiri of the Central
Australian desert, for instance, it is assumed that the "articulation" of speech and of the body in motion
relate to the same agency: they call it kurruwalpa. Anthropologists translated this word as "spirit-child",
a concept common to most of the 200 Indigenous languages of Australia. My hypothesis is that the
"spirit-child" concept has been misunderstood by generations of scholars, simply because they were
caught in a Christian and philosophical paradigm that separates the body and the mind.
Anthropologists translated as "spirit-child" the Indigenous idea that sexual relations are not enough to
produce a child: indeed, the Aborigines say that it is also necessary to have an agent (both spiritual
and material) coming from the memory of the earth, and a given place, to allow the child to
"embody" a certain shape which can be "articulated" both through speech and through body motion. If
we consider some recent neurological observations, we can relate this to the fact that the deterioration
of brain cells seems to perturb simultaneously the balance of the walking body and the ability to
focus in speech.
We certainly still have a lot to discover from various Indigenous knowledges, which remain
hidden by the difficulty of translating into our scientific discourse cultural insights that we have not
yet fully experienced. For instance, I once witnessed, during the Festival of Science on memory
organised in Chamonix, how an old gentleman whose brain damage had prevented him from
talking for three years suddenly, to the surprise of his daughter, started to sing in an Aboriginal
language in tune with the Australian singer (Wayne Jowandi Barker) who was singing on the stage,
his face blown up on a video screen. Was he following the movements of the lips on the screen, or was
he stimulated by the vibrations of the electro-like sound of the didjeridu played by the singer just
before? Many Aboriginal and non-Aboriginal people fascinated by this instrument say that it has
healing properties. I would like to think that we simply have not yet measured the power of the
connections that interact between all the senses, the muscles and all the vibrations that circulate through
space when one navigates in it.
The idea in the anthropology of perception proposed here is to reveal a plural process of
"existentialisation" which participates in the enactive complexity that characterises human
interactions with multimodal interfaces. In the Australian Indigenous case, these interfaces unfold an
ancestral technology of links connecting humans with everything that constitutes the actual and
virtual environment. Ritual is a tool for enaction and the production of intersubjectivity, not only
through performance but also through the interpretation and transposition of dreams, which among
Aboriginal people can stimulate new "mises en scène", staged in songs and dances. Some young
artists who grew up in cities and do not have a traditional cultural background also say that the
process of dance stimulated by traditional music sometimes makes them discover an ancient body
"language", or rather a score of gestures (gestuelle in French does not necessarily mean a
language that can be reduced to a system of signs: in fact, I believe the perception of dance has an extra
element of "impression" which can only be expressed in dancing as a performance and cannot be
reduced to coded signs...): these gestures seem to be buried in the dancers and are revealed,
like a film negative on photographic paper, in the actual moving body. The question is: where does this
memory come from? Is it in the body or in the mind, or both, learnt or inherited?
Audio-visual exploration (and enactive interface technologies) can enhance the cultural
foundations of the reticular way in which many Indigenous people in Australia map their knowledge
and experience of the world in a geographical and virtual web of narratives, images and performances.
Non-linear or reticular thinking mostly stresses the fact that there is no centrality to the whole, but a
multipolar view from each recomposed network within each singularity, a person, a place, a story,
allowing the emergence of meanings and performances, encounters, creations as new original
autonomous flows. Reticular or network thinking, I argue, is a very ancient Indigenous practice, but
it gains a striking actuality today thanks to the fact that our so-called scientific perception of
cognition, virtuality and social performance has changed through the use of new technologies. For
me, reticular perception is more than a structuring metaphor as defined by Lakoff and Johnson
twenty years ago. The process of interaction involved between people, objects and environment is
not just about the representation of spoken language but about sensory connections, including space
perception defined as the sixth sense.
To feel places through travelling is at the heart of traditional Aboriginal philosophy. This
philosophy is expressed in many different ways by the various Australian Indigenous groups -
hundreds of languages were spoken on the continent before the arrival of the European settlers two
centuries ago. One of the mediums which allows us to understand some of the Indigenous
philosophies is to look at what the Western social sciences have called mythology. Mythology
encompasses ancestral stories which have the value of foundational statements questioning the
place of men and women in the world, tensions of human fate such as gender, autonomy and
dependency, attachment and separation, alliance and conflict, love and anger, life and death. Myths
often project such existential questions onto a cosmological stage where the actions of hybrid actors -
anthropomorphic, animal, vegetal and supernatural - explain not only social rules but also the
coming into being of the earth, the stars and all existing fauna, flora or other phenomena.
Aboriginal people call such stories Bugarrigarra on the West coast, Jukurrpa in the desert or
Wangarr in Arnhem Land. They translate these Indigenous words into English as « Dreamings », a
very complex concept whose understanding varies according to different local systems of
knowledge. Nevertheless, in all cases, the Dreaming stories are related to at least one place and
more commonly to a series of places sung in a songline as part of related rituals. The places unfold
like an itinerary, an invisible route, which is marked by the natural features where Dreaming events
took place. Such routes can connect hundreds of places over hundreds of kilometres. They are the
virtual pathways materialising the travels of the ancestors from place to place.
All things named in nature and culture can have a Dreaming, that is, a story connecting places.
These places, which look to us like natural features - rocks, waterholes or creeks - are cultural sites for
Aboriginal people, because almost all features of the landscape are considered to be an "imprint" or
a materialisation of the ancestral travellers. These numerous travellers are at the same time the
ancestors of some humans and of the animals, plants, rain or fire, who gave the Law and the culture
to the relevant people. All the ancestral Law makers have their names (common nouns like
Kangaroo or Rain, or proper names without a common meaning, like the Djangawul or Wawilak
Sisters). The mythical heroes are also all called today in English "Dreamings": the Dreamings are
said to be eternal and still « dreaming » in those places which are revered as sacred sites (such as
the famous Uluru, Ayers Rock, in Central Australia). Thousands of Dreaming stories and routes
crisscross the whole of Australia. The routes are defined by series of place names, which usually are
not common nouns but toponyms forming a complex web connecting groups according to shared
songlines which connect these places from North to South, West to East, or vice versa.
There are many ways to tell stories, to develop different angles, interpretations and
connections, according to one's own style but also one's experience. Even new episodes can be added:
Warlpiri people, for instance, say that they can communicate with the eternal ancestors who sleep in
the sacred sites when their spirit travels in their own dreams - especially when they sleep in these
sites. All the sleepers of one camp are often asked to share their dreams, as their collective
experience is considered to be « the same dream ». When it is recognised through the interpretation of
a dream that the dreamers travelled in the space-time of a given Dreaming, the dreamers' vision can
be materialised through a new song, a painted design or a dance. Such dream creativity is seen as
information given by the Dreaming virtual memory, even though it can be a re-adaptation of a recent
event that affected the living. From the Dreaming point of view, it is « actualised » by Jukurrpa, the
virtual matrix and its ancestral inhabitants. The Dreaming is not a golden age or an eternal
repetition of something without past or history. Just as the evaluation of space in the
desert geography is relative to the speed at which you can travel, the perception of time is relative
to the way you treat an event: sometimes it is to be forgotten, temporarily avoided because of a death
or a conflict; other times it is to be re-memorized and transformed, to be projected into the future and
set as an example.
A multimedia example will be used to support the audiovisual evidence: the CD-Rom Dream
trackers. Yapa art and knowledge of the Australian desert (UNESCO Publishing, 2000), which Barbara
Glowczewski developed with 50 Warlpiri artists, based on audiovisual data she has collected in
Lajamanu since 1979. The aim was to follow the Indigenous mapping of knowledge, which projects
information into places and geographical links between these places, as a way to construct a mental
map for understanding the process of transmission of cultural knowledge.
Why perspective viewing of electronic documents should be allowed
in the multi-purpose graphical user interface

Yves Guiard1, Yangzhou Du2, Olivier Chapuis2 & Michel Beaudouin-Lafon2
1 Mouvement et Perception, CNRS & University of the Mediterranean, France
2 Laboratoire de Recherche en Informatique & INRIA Futurs,
CNRS & University of Paris XI, France
yves.guiard@univmed.fr

We normally see most of the world's surfaces obliquely, hence in perspective. Physical
documents, based on the artefact of the sheet, are an exception—sheets of paper are deliberately
manufactured with a small enough size that we can manipulate them and inspect them with a
roughly perpendicular viewing angle. But the case is quite different with electronic documents,
which are typically very large surfaces. So long as electronic documents are displayed as planar
surfaces (actually a nearly universal option in the state of the art design, and arguably the best
option), they will need to be navigated, rather than manipulated. Hence the question, why aren't
users allowed to see their documents in perspective while navigating them?

It is useful to view the multi-function graphical user interface (GUI) with which computer
users navigate a broad diversity of electronic documents in a broad diversity of applications as a
flight simulator. An electronic document is a special kind of landscape, being contents-editable and
rigorously flat, yet it constitutes the environment of a virtual locomotion. Like any flight simulator,
the GUI involves a computer graphics camera that the user can translate in x and y (latitude and
longitude variations, i.e., scroll), and z (altitude variations, i.e., zoom).

But this one flight simulator (the most important of all, if only because it exists in millions of
copies) suffers from a problem of its own: unlike normal flight simulators, it involves a camera
whose orientation is perpendicular (rather than oblique or parallel) to the visualised surface. We
will argue that this viewing orientation is problematic for all document navigation operations. For
one thing, the 90° viewing angle precludes perspective views, which happen to be an efficient and
quite natural solution to the context-plus-focus problem that is central to research on information
visualisation (Furnas, 2006). Another concern has to do with the optical flow fields that arise during
active navigation (Gibson, 1979): the 90° viewing angle unjustifiably deprives users of the
prospective information they need to control their navigation.

We will report some experimental data showing that


• for target-directed navigation (Fitts' paradigm), perspective navigation surpasses the
traditional fixed-perpendicular zooming technique but only up to a certain level of target
difficulty, due to scale implosion with increasing observation distance (Guiard et al., 2006;
Du et al., in press), and
• the facilitation of navigation allowed by one degree of freedom of camera tilt is particularly
impressive for more realistic tasks in which people must not only reach a certain goal but
also gain information from the document space they are to traverse (Guiard et al., in
preparation).

We will demonstrate a prototype of an interface allowing perspective navigation, available online at


http://www.lri.fr/~du/pvi/.

We will conclude with a simple practical recommendation. Let us provide users of the standard GUI
with one extra degree of freedom (pitch) to control their virtual camera.
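As a purely illustrative sketch of what this extra degree of freedom amounts to (our own toy example,
not the prototype above; the pin-hole model, sign conventions and parameter names are assumptions),
the code below projects a point of a flat document for a camera that can scroll (x, y), zoom (altitude)
and, in addition, pitch. Tilting the camera pulls distant parts of the document into view, which is what
makes perspective useful for focus-plus-context navigation.

import numpy as np

def project_document_point(p_doc, cam_xy, altitude, pitch_deg, focal=1.0):
    """Pin-hole projection of a point of a document lying in the z = 0 plane.
    pitch_deg = 0 gives the usual perpendicular view; a positive pitch tilts the
    camera forward and yields a perspective (oblique) view of the page."""
    # Point expressed relative to the camera, which sits at (cam_x, cam_y, altitude).
    dx, dy, dz = p_doc[0] - cam_xy[0], p_doc[1] - cam_xy[1], -altitude
    t = np.radians(pitch_deg)
    # World-to-camera rotation for a pitch about the camera's x axis.
    y_cam = np.cos(t) * dy + np.sin(t) * dz
    z_cam = -np.sin(t) * dy + np.cos(t) * dz
    if z_cam >= 0:
        return None                      # the point is behind the camera
    return focal * dx / -z_cam, focal * y_cam / -z_cam

# The same distant document point, with a perpendicular view and with a 30° tilt.
print(project_document_point((0.0, 2.0), cam_xy=(0.0, 0.0), altitude=1.0, pitch_deg=0))
print(project_document_point((0.0, 2.0), cam_xy=(0.0, 0.0), altitude=1.0, pitch_deg=30))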

References
Du, Y., Guiard, Y., Chapuis, O., & Beaudouin-Lafon, M. (2006). Assisting Target Acquisition in Perspective View.
Proceedings of the British Computer Society Conference on Human Computer Interaction (HCI’06). London
(UK), Sept. 2006. ACM Press.
Furnas, G. W. (2006). A Fisheye Follow up: Further reflections on focus + context. Proceedings of Conference on
Human factors in computing systems (CHI’06).
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Guiard, Y., Chapuis, O., Du, Y., & Beaudouin-Lafon, M. (2006). Allowing Camera Tilts for Document Navigation in
the Standard GUI: A Discussion and an Experiment. Proceedings of Advanced Visual Interfaces (AVI), pp. 241-
244. Venice (Italy), May 2006. ACM Press.
Guiard, Y., Du, Y., & Chapuis, O. (in press). The benefit of perspective visualisation for document navigation varies
with the degree of goal directedness.
Computer/human structural coupling for data interpretation
Thomas Guyet1, Catherine Garbay2 & Michel Dojat3
1 TIMC/IMAG Laboratory, France
2 CLIPS/IMAG Laboratory, France
3 UMR594 UJF/INSERM, France
thomas.guyet@imag.fr

We work on the design of computerized systems that support experts during their complex
and poorly formalized data interpretation processes. We consider interpretation as the process by
which high-level abstracted information is attached to data. We assume that a computer could
efficiently help an expert in such a process via a structural coupling (Maturana and Varela, 1994)
based on their interactions by means of specific annotations. Enaction appears as a stimulating
source of inspiration for the design of such systems. Our approach is applied to the interpretation of
physiological time series acquired from patients in an intensive care unit (ICU).

Time series interpretation for ICU monitoring systems


Nowadays in the ICU, monitoring systems have high false-alarm rates (Tsien et al., 2000),
essentially because they rely on simple threshold-based methods and do not take into account the
richness of the information contained in physiological parameter records. A patient record is a
long, high-frequency, multivariate time series. Classical monitoring improvement techniques
focus on algorithms at a numerical level (Tsien et al., 2000; Calvelo et al., 2000). On the
contrary, we propose that symbolic abstraction can bring useful information to the data, providing a
better understanding of these still poorly formalized data.
A multivariate time series is the trace of the patient's physiological time course. Two types of
information can be discovered: 1) events that may occur, and 2) relations (causal or temporal)
between events that highlight the evolution of the patient's state. Accordingly, two sorts of annotations
are introduced: symbols and relations between symbols. A symbol is an abstraction of the data-level
signature of an event. For time series data, a signature is a fuzzy pattern. When a segment matches
such a fuzzy pattern, it is associated with the corresponding symbol. The interpretation of the time
series data should reveal what the important events are and then build the corresponding signatures.
With this information, numerical time series are translated into symbolic time series. A relation
between symbols is a trace of the events' dynamics. A relation brings contextual information to the
symbols that are implicated in it. A scenario is a complex set of relations between several symbols
that is relevant, i.e. frequently or rarely observed in the evolution of patients' states.
Clinicians have difficulties defining the signatures of relevant clinical events and the various
significant patterns attached to them. Moreover, the identification of pertinent relationships between
them turns out to be a complex combinatorial task. The aim of our work is to assist clinicians, using
the computer, in their interpretation of physiological data, in order to discover the useful information -
events, signatures and scenarios - which can eventually be introduced into a new generation of
monitoring systems.
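As a rough illustration of this numeric-to-symbolic translation (our own minimal sketch, not the
system described here; the crisp trend thresholds stand in for the fuzzy pattern matching described
above, and all names and values are hypothetical), the code below segments a physiological series
into fixed windows and labels each window with a trend symbol.

import numpy as np

def symbolize(series, window, slope_threshold):
    """Translate a numerical time series into a symbolic one: each fixed-length
    window is labelled 'up', 'down' or 'steady' from the slope of a linear fit."""
    symbols = []
    for start in range(0, len(series) - window + 1, window):
        segment = series[start:start + window]
        slope = np.polyfit(np.arange(window), segment, deg=1)[0]
        if slope > slope_threshold:
            symbols.append("up")
        elif slope < -slope_threshold:
            symbols.append("down")
        else:
            symbols.append("steady")
    return symbols

# Toy heart-rate record (bpm): stable, then a rapid rise, then a partial recovery.
hr = np.concatenate([np.full(30, 80.0), np.linspace(80, 110, 30), np.linspace(110, 90, 30)])
print(symbolize(hr, window=15, slope_threshold=0.3))   # ['steady', 'steady', 'up', 'up', 'down', 'down']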

Human & Machine structural coupling: An enactive approach


We advocate a structural coupling between a clinician and a machine in order to benefit from their
complementarities. A clinician is able to take into account the large context of a specific patient's
data record and has a global view of the data. A machine is able to process a large number of patients'
cases and to perform numerical analyses of the data. Our goal is to enable this structural coupling,
based on enaction theory. The clinician and the machine are both considered as agents that can
perceive and modify their environment, made of time series data and annotations, and that have mutual
interactions.
Varela et al. (1993) define an enactive system as a system that builds the world while being built
by it. We transpose this definition to the case of a world of time series data. An agent modifies its
interpretation models along the interpretation process. Enaction theory leads us to consider two
design constraints: 1) each agent builds its own models independently of the other, and 2)
interactions are possible through the shared annotations and data. In the structural coupling
paradigm, interactions guide the agents towards congruent models. Our approach shares some
similarities with the talking heads of Steels (2003), which interact about the shared perception of
geometric figures and build abstract models, in the form of a shared lexicon, in an emergent way.
Course-of-action centered design (Theureau, 2003) proposes to involve users in a system's
construction before its exploitation phase. We propose here to design systems that evolve during
their use based on the agents' experience, e.g. past processing and past interactions with the
other agents.

System architecture
Figure 1 shows the system architecture. Three successive steps are considered, which consist of
extracting meaningful segments of data (segmentation step), transforming them into symbolic
signatures (classification step) and finally associating them to form temporal scenarios (learning
step). Each step is performed in a bottom-up way, so that new information emerges from
lower-level processing. In parallel, lower-level information is revised in the light of new upper-level
information (feedback). This ensures global knowledge consistency. At each step, the clinician is
included in a two-way circular information flow that represents mutual human-machine interaction
through the data. Interactions involve pointing out relevant information and annotation mechanisms.
We have developed algorithms for each step, and the complete system is currently under
implementation. The symbolic time series transformation is under evaluation.

Figure 1. Interactive time series interpretation system.
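To suggest what the learning step might do with the resulting annotations (again only a sketch under assumed data structures, not the authors' algorithm), the Python fragment below counts simple "A before B" precedence relations between symbols within a bounded time gap; frequently or rarely observed counts are the kind of raw material from which candidate scenarios could be proposed to the clinician.

from collections import Counter
from itertools import combinations

def count_precedence_relations(symbolic_series, max_gap=5):
    """Count how often one symbol precedes another within a bounded gap.

    `symbolic_series` is a list of (time_index, symbol) annotations; the
    counts of "A before B" pairs stand in, crudely, for the temporal
    relations from which scenarios could be mined.
    """
    counts = Counter()
    for (t1, s1), (t2, s2) in combinations(sorted(symbolic_series), 2):
        if 0 < t2 - t1 <= max_gap:
            counts[(s1, "before", s2)] += 1
    return counts

# Hypothetical annotations produced by the classification step.
annotations = [(0, "RISE"), (2, "TACHYCARDIA"), (3, "DESATURATION"),
               (8, "STABLE"), (10, "RISE"), (12, "TACHYCARDIA")]
print(count_precedence_relations(annotations).most_common(3))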

References
Calvelo, D., Chambrin, M., Pomorski, D., & Ravaux, P. (2000). Towards Symbolization Using Data-Driven Extraction
of Local Trends for ICU Monitoring. Artificial Intelligence in Medicine, 19, 203-223.
Maturana, H., & Varela, F. (1994). L’arbre de la connaissance. Paris, Addison-Wesley.
Steels, L. (2003). Evolving grounded communication for robots. Trends in Cognitive Science, 7, 308-312.
Theureau, J. (2003). Course-of-action analysis and course-of-action centered design. In: E. Hollnagel (Eds). Handbook
of cognitive task design (pp 55-81). Lawrence Erlbaum Ass., Mahwah, New Jersey.
Tsien, C. L., Kohane, I. S., & McIntosh, N. (2000). Multiple Signal Integration by Decision Tree Induction to Detect
Artifacts in the Neonatal Intensive Care Unit. Artificial Intelligence in Medicine, 19,189-202.
Varela, F., Thompson, E., & Rosch, E. (1993). L'inscription corporelle de l'esprit. Paris, Seuil.
Action for perception : influence of handedness
in visuo-auditory sensory substitution

Sylvain Hanneton & Claudia Munoz


Laboratoire de Neurophysique et Physiologie
CNRS UMR 8119 and University René Descartes, France
sylvain.hanneton@univ-paris5.fr

In this preliminary study we address the question of the influence of handedness on the
localization of targets perceived through a visuo-auditory substitution device. Participants hold the
device in one hand in order to explore the environment and perceive the target, and point to the
estimated location of the target with the other hand. This location can be considered enactive
knowledge, since it is gained through perception-action interactions with the environment.
Handedness may influence the accuracy of the pointing: should the device be held in the right or the left
hand? There are two possible main results. (1) Participants are more accurate with the device in the
left hand, because pointing movements are more skillful with the dominant hand. (2) Participants are
more accurate with the device in the right hand, because exploratory movements (perceptive
movements) are more precisely controlled with the dominant hand. Enaction theory assumes that action
for perception is crucial to establishing enactive knowledge. According to this theory, the dominant
hand should be devoted to the fine control of perceptive movements rather than to pointing movements.
Consequently we expect to obtain the second result. In another context, right-handed rifle shooters
with a dominant left eye were shown to be more accurate if they held the rifle in the left arm (Porac
& Coren, 1981).

Figure 1. Experimental setup and the imposed hand grasping posture for holding the webcam.

Methods
The VIBE device converts a video stream into a stereophonic sound stream. It requires only a
standard webcam and a standard computer (see Auvray et al., 2005 for details). The results presented
here concern three young right-handed females. Participants were instructed to point to targets
perceived and memorized either visually ("vision" experimental condition, VEC) or via the VIBE
device ("prosthesis" condition, PEC). In the VEC condition, subjects were asked to observe the
target for three seconds, to close their eyes and to point immediately to the estimated position of
the target with either the left or the right index finger. In the PEC condition, participants were blindfolded,
wore closed headphones, and held the webcam in the right or left hand with an imposed grasping
posture (figure 1). The elbow of the arm holding the webcam had to remain at a specific location on
the table. Participants had 15 seconds to explore the environment and then pointed to the estimated
target location. Each participant performed 45 trials (3 target positions x 15 repetitions) in four
experimental conditions (VEC or PEC, pointing with the left or the right hand). We studied the
influence of the experimental conditions on the distance to the target, on constant and variable pointing
errors, and on the confidence ellipse for each target and each experimental condition. Confidence
ellipses (linear local estimates of the error distributions) give access to perceptual distortions
induced by "defects" of the perceptive organs, memory or actuators, and to the structure of the spatial
representation of targets (McIntyre et al., 1998).
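For readers unfamiliar with the measure, the following Python sketch shows one standard way to obtain such a confidence ellipse from a set of 2D pointing endpoints; the 95% chi-square constant, function names and simulated data are illustrative and do not come from the study.

import numpy as np

def confidence_ellipse(points, chi2_95=5.991):
    """Return centre, semi-axes and orientation of a 95% confidence ellipse.

    `points` is an (n, 2) array of pointing endpoints (x, y) in cm.
    The ellipse axes are the eigenvectors of the sample covariance matrix,
    scaled by sqrt(eigenvalue * chi-square quantile for 2 degrees of freedom).
    """
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)                  # constant error component
    cov = np.cov(pts, rowvar=False)            # variable error component
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    semi_axes = np.sqrt(eigvals * chi2_95)
    orientation = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    return centre, semi_axes, orientation

# Example with simulated endpoints scattered around a target at (10, 20) cm.
rng = np.random.default_rng(0)
endpoints = rng.multivariate_normal([10.5, 19.0], [[2.0, 0.8], [0.8, 1.0]], size=15)
print(confidence_ellipse(endpoints))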

Results
The mean distance to the target is, as one would expect, lower in the VEC condition than in the PEC
condition (table 1). In the PEC condition, the mean distance to the target is lower when pointing with
the left hand (webcam in the right hand) than in the opposite condition. Note also that the
variance of the distance to the target is systematically lower in this condition. The characteristics of the
confidence ellipses are very variable across subjects, targets and experimental conditions. However,
we emphasize that in the PEC condition, participants were more precise in seven cases out of
nine (3 targets x 3 participants) when holding the webcam in the right hand. The "inversion" of
handedness is particularly clear for participant #2 (figure 2).

Table 1. Means and standard deviations for the different error measures.
                                 Condition VEC (vision)        Condition PEC (prosthesis)
Pointing with →                  Right hand     Left hand      Right hand     Left hand
Distance to target (cm)          3.87 ± 1.81    3.99 ± 2.17    6.48 ± 5.00    6.06 ± 3.1
Absolute distance error (cm)     1.86 ± 1.41    2.41 ± 1.78    3.83 ± 3.88    4.06 ± 2.7
Absolute direction error (deg)   3.51 ± 2.25    3.25 ± 2.35    5.07 ± 4.47    4.22 ± 3.1

Figure 2. Pointing confidence ellipses obtained for the second participant when pointing either
to visually remembered targets (VEC, left) or to targets perceived through the sensory
substitution device (PEC, right). Bold ellipses: pointing with the right hand. Solid ellipses:
pointing with the left hand.

Discussion
These preliminary results support our hypothesis that pointing is more accurate when the device
is held in the right, dominant hand: dexterity should be attributed to the active part of the perceptive
system. This study obviously needs to be completed, but it shows how important the concept of enaction
is and how it can be experimentally addressed in the field of sensory substitution.

References
Auvray, M., Hanneton, S., Lenay, C., & O’Regan, K. (2005). There is something out there: distal attribution in sensory
substitution, twenty years later. Journal of Integrative Neuroscience, 4(4), 505-21.
Azemar, G. (2003). L’homme asymétrique : gauchers et droitiers face à face. CNRS Editions, Paris.
McIntyre, J., Stratta, F., & Lacquaniti, F. (1998). Short-term memory for reaching to visual targets: psychophysical
evidence for body-centered reference frame. The Journal of Neuroscience, 16(20), 8423-8435.
Porac, C., & Coren, S. (1981). Lateral preferences and human behaviour. New York: Springer Verlag.
A virtual haptic-audio line drawing program

Charlotte Magnusson, Kirsten Rassmus-Gröhn & Håkan Eftring


Certec, Design Science, Lund University, Sweden
charlotte.magnusson@certec.lth.se

A virtual haptic-audio drawing program prototype designed for visually impaired children has
been gradually developed in a design-evaluation loop involving users in five stages. Four qualitative
evaluations focused on recognizing drawn shapes and creating drawings have been conducted
together with a reference group of 5 visually impaired children. Additionally, one formal pilot test
involving 11 adult sighted users investigated the use of a combination of haptic and sound field
feedback. The program currently allows standard drawing operations such as free-hand drawing,
drawing of simple geometric figures, copy/cut/paste/delete, size changes and curve movements. Results
indicate a subjective preference as well as a shorter examination time for negative relief over
positive relief for the interpretation of simple shapes such as 2D geometrical figures. The presence
of the position sound field, with a pitch and stereo panning analogy, was not shown to affect task
completion times. The further addition of sound icons to identify objects and provide feedback on
actions performed was finally seen to significantly enhance the experience of using the program.

Introduction
Getting access to 2D graphics is still a major problem for users who are severely visually
impaired. Using a haptic display in combination with audio feedback is one way to enable access.
There are many issues to address, e.g. how to provide an overview, to what extent users are able to
interpret a combination of lines or line segments as a complex image, how to design the lines to
get appropriate haptic feedback, what hardware to use, etc.
There are few tools that enable blind people to create computer graphics. As described in
Kennedy (1993) and Salzhauer & Sobol (2003), there are indeed people who are blind who have an
interest in hand drawing. In Kamel (2003), a CAD application is presented that enables users to
create drawings with the help of audio and keyboard. This is accomplished through a structured approach
of dividing a drawing into small parts and enabling the user to draw small segments of a drawing at a time.
In Hansson (2004), a study on a haptic drawing and painting program is presented; that work
can be seen as a pre-study to the work in this article.

The haptic-audio drawing program


The presented prototype makes it possible to create black & white relief drawings and tries to
incorporate the improvements suggested by Hansson (2004). The Reachin 4 beta software is used to
control the haptic device, which can be either a PHANToM OMNI or a PHANToM Premium. The
sound control is based on the FMod API.
The application consists of a room with a virtual paper sheet on which a user can draw a positive
or negative relief. The virtual paper is inscribed in a limiting box. When the PHANToM pen is in
touch with the virtual paper, the user can draw on it while pressing the PHANToM switch. The relief
height (depth) is 4 mm.
Different types of sound feedback have been added to the program. One type is a localization
feedback sound field. When the cursor moves in the virtual room, the pitch of a position tone
changes: brighter upwards and mellower downwards. Mode information is conveyed by the
volume and timbre of the tone. In free space, a pure sine wave is used. When the user is in contact
with the virtual drawing paper (not pressing the PHANToM switch), the volume is louder, and
when the user is drawing (and thus pressing the PHANToM switch) the tone changes to a sawtooth
wave. Also, to differentiate the walls of the limiting box from the virtual paper, a contact
sound is played when the user hits a wall with the PHANToM pen. Another type of sound feedback
added is a set of sound icons to identify objects (curve, rectangle, circle or straight line) and to
provide feedback on the different drawing operations such as copy/cut/paste/delete, size changes
and curve movements.
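A rough sketch of this kind of audio mapping is given below (illustrative only; the frequency range, volumes and mode names are assumptions and are not taken from the actual program): cursor height drives pitch, horizontal position drives stereo panning, and the interaction mode selects waveform and volume.

import math

def position_sound(x, y, mode, room_width=0.4, room_height=0.3,
                   f_min=220.0, f_max=880.0):
    """Map cursor position and interaction mode to sound parameters.

    x, y are cursor coordinates in metres within the virtual room;
    returns (frequency in Hz, stereo pan in [-1, 1], volume, waveform).
    """
    # Height controls pitch: brighter (higher) upwards, mellower downwards.
    ratio = min(max(y / room_height, 0.0), 1.0)
    frequency = f_min * math.exp(ratio * math.log(f_max / f_min))
    # Horizontal position controls stereo panning (-1 = left, +1 = right).
    pan = 2.0 * min(max(x / room_width, 0.0), 1.0) - 1.0
    # Mode controls volume and timbre, as described in the text.
    if mode == "free_space":
        volume, waveform = 0.3, "sine"
    elif mode == "touching_paper":
        volume, waveform = 0.6, "sine"
    else:  # "drawing": the switch is pressed while on the paper
        volume, waveform = 0.6, "sawtooth"
    return frequency, pan, volume, waveform

print(position_sound(0.1, 0.25, "drawing"))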

Results and Conclusion


All users seem to enjoy the program and have found it easy both to draw and to feel the lines.
For line following, negative relief seems to be preferred, although one user expressed a liking for
positive relief. Both vertical and horizontal work areas have been tested. The horizontal work area
puts less strain on the arm and allows external physical constraints (e.g. the table) to stop the
user from pushing through the haptic surface too much. The vertical work area, on the other hand,
seems to generate more distinct haptic feedback – users express that shape differences may be easier
to feel with this orientation.
For the tests with both haptics and audio, all test users were able to use the application as
intended, and the differences in task results between users seem to match personal differences in fine
motor skills and the ability to remember images. During the test sessions some general observations
were also made. It seemed as if some of the users were helped by the difference in sound character
in knowing which program mode they were in. The sounds, however artificial, did not disturb the users
very much, although one musically trained user found them disturbing. That same user also
noted the similarity of the sound to the aiming sound used for target shooting by blind users.
Another user expressed great enjoyment of the sound and spent quite some time playing with it.
At a later test all users were able to use the program to draw a schematic house, and the sound icons
were commented on as enhancing the drawing experience (expressions such as "this is so cool!" and
"great fun!" were used).
In contrast, the formal tests did not show any effect of the localization audio feedback on task
completion times. This test did show shorter examination times for negative relief than for
positive relief for the interpretation of simple shapes. These results are in agreement with other
similar studies (Yu, Brewster, Ramloll & Ridel, 2001; Sjöström, Danielsson, Magnusson &
Rassmus-Gröhn, 2003; Riedel & Burton, 2002; Pakkanen & Raisamo, 2004).
Finally, it must be pointed out that the design of a drawing program based on haptic and audio
feedback puts particular focus on the role of gestures in the interaction. Visually based metaphors
like click-and-drag need to be replaced by more gesture-based interaction techniques.

References
Hansson, C. (2003). Haptic Drawing Program. Master Design Sciences, Faculty of Engineering, Lund University.
Kamel, H. M. (2003). The Integrated Communication 2 Draw (IC2D). Ph D Electrical Engineering and Computer
Sciences Department, University of California.
Kennedy, J. M. (1993). Drawing and the Blind (Pictures to Touch). New Haven, London: Yale University Press.
Pakkanen, T., & Raisamo, R. (2005). Perception strategies in modal-redistributed interaction. Computer Architecture,
Proceedings 31st Annual International Symposium on, pp. 641-644.
Riedel, B., & Burton, A. M. (2002). Perception of Gradient in Haptic Graphs by Sighted and Visually Impaired
Subjects. Proceedings of Eurohaptics 2002, Edinburgh University, pp. 134-137.
Salzhauer, E., & Sobol, N. (2003). Art Beyond Sight Art Education for the Blind. Inc. and AFB Press.
Sjöström, C., Danielsson, H., Magnusson, C., & Rassmus-Gröhn, K. (2003). Phantom-based haptic line graphics for
blind persons. Visual Impairment Research, 5(1), 13-32.
Yu, W., Brewster, S., Ramloll, R., & Ridel, B. (2001). Exploring computer-generated line graphs through virtual touch.
Sixth International Symposium on Signal Processing and its Applications, vol. 1, pp. 72-75.
Enaction approached by Tao and physiology
Nancy Midol & Weiguo Hu
Laboratory of Anthropology, University of Nice Sophia Antipolis, France
Researches Department of acupuncture, TCM Institute of Beijing, China
midol@unice.fr

How do we know what we know? What knowledge organizes our lifestyle? Knowledge is
distinguished from belief by its justification, and much of epistemology is concerned
with how true beliefs might be properly justified.
If we compare Taoist knowledge with Western scientific knowledge of environmental
interactive perception-action, we may sometimes discover similar knowledge, but always an
opposite methodological way of thinking about it.
Traditional Chinese thought is one of the oldest in one of the most complex civilizations in the world. It
mainly refers to three philosophical roots: Taoism, Buddhism and Confucianism. Taoism and
Buddhism are "philosophies in act", generally considered as ways of wisdom for discovering the
Nature of Reality through years of spiritual cultivation, meditation, non-action, and so on. This
transformational discovery is called "Awakening" or "Enlightenment".

By contrast, modern Western science emerged from philosophy two centuries ago in
Europe. In the broadest sense, science refers to any knowledge or trained skill that is reliably
attained. More narrowly, the word science refers to a system of acquired knowledge based upon
empiricism and experimentation. Scientists deal with phenomena, measurements, correlations,
causal mechanisms and modeling, each of which has a distinctive epistemic status.

Though Buddhism and Taoism have existed for more than 2500 years, the scientific
community did not engage with their thought until the middle of the 20th century, when some famous
scientists investigated Zen and Tao practices. Thereafter, they initiated new dialogues with
Buddhists and other spiritual guides. They agreed with the idea of replacing present scientific
paradigms with a new, holistically based one, which would mark a New Vision of Reality, as Fritjof
Capra (1988) said. The formulation of a system of nature - both Inner and Outer - is an inter-
disciplinary development project. In the direction of increasing complexity, it includes self-
awareness, conscious experience, conceptual thought, symbolic language, dreams, art… Ramón y
Cajal alluded to the "mysteries of the universe" as a metaphor reflecting the mysteries of the brain
(Austin, 2005, p. 23).

The biologist Francisco Varela, who practiced Tibetan Buddhism from the 1960s, was a
proponent of embodied philosophy (Varela & Depraz, 2001), which claims that human cognition
and consciousness can only be understood in terms of the enactive structures in which body and
environment arise and interact (Varela, Thompson & Rosch, 1991). He made an impact on
neuroscience research by introducing the concepts of neurophenomenology, based on the writings of
Husserl and Merleau-Ponty, in which observers apply scientifically verifiable biological methods
for examining the nature of their own conscious experience.

At the Palo Alto School, Gregory Bateson participated in the same revolution of the mind. He
helped to elaborate the science of cybernetics, developing the "double bind" theory. Paul Watzlawick
and Ronald D. Laing, both psychiatrists, studied mental processes and the paradoxes of
communication. They looked at parallels between mind and natural evolution and helped to
complexify Western thought.
But Western scientists accept Eastern knowledge only on the condition that it has been
validated by experimental protocols. James H. Austin (2005), in his recent book entitled "Zen-Brain
Reflections", asks: What is Zen? How does the human brain function? What really goes on in
meditation and enlightened states? Thus access to validated reality is granted by science,
which takes a posture of superiority, affirming its predominance over Eastern philosophy. However,
the hybrid posture taken by scientists such as James H. Austin or David Servan-Schreiber,
who, like true anthropologists, were initiated into Zen practice or into Ayurvedic or traditional
Chinese medicine, gives them the double competence needed to test Eastern thought in their own
experience of life. Conversely, Eastern Buddhists and Taoists were long ago initiated into
modern science.
As Austin writes: "You cannot probe deeply into phenomena of such states without
uncovering properties of fundamental neurobiological significance. Indeed, one finds that meditation
experiences not only are "reflected" in the findings of neuroscience but that these two fields are so
intimately interrelated that each illuminate the other" (2005, p. 23).
Thus the dialogue between Buddhism and science exists today, and it is within this framework
that we make our presentation. We compare a Taoist Chinese precept relating to the bodily
inscription of self-knowledge and perception of the world with neurophysiological knowledge
from Austin (2005) and David Servan-Schreiber (2003), supported by our clinical results on Qigong
practices.
We conclude that the dialogue between philosophy and science will improve in several ways,
as exemplified in alternative well-being body techniques and medical treatments.

References
Austin, J. H. (2005). Zen-Brain Reflections. MIT Press, London.
Capra, F. (1988). Sagesse des Sages, conversations avec des personnalités remarquables. Paris, L'Age du verseau
Belfond (pp332).
Midol, N., & Hu, W. (2006). Le corps dans la pensée traditionnelle chinoise. In Andrieu, B. Le dictionnaire du corps,
(pp89-90), CNRS Éditions, Paris.
Servan-Schreiber, D. (2003). Guérir. Robert Laffont, Paris.
Varela, F., & Depraz, N. (2001), Imagining: Embodiment, Phenomenology and transformation. In: Wallace A, Breaking
the ground : Essays on Tibetan Buddhism and the Natural Sciences (pp120-145), Columbia U.P.
Haptic exploration of buildings for visually impaired persons:
Show me the map

Luděk Pokluda & Jiří Sochor


Faculty of Informatics, Masaryk University, Czech Republic
xpokluda@fi.muni.cz

In this study we focus on a technique designed for the navigation of visually impaired persons in a
building. A gesture-based system with a force-feedback device (3 DOF) is used to help them
familiarize themselves with the spatial layout of an interior. With the show me the map technique, a user is led
through the map of a virtual building by a haptic device. The layout of a building is simplified to a
symbolic map consisting of gestures representing the important and interesting features of the
building. In our experiments, we tested the understanding of the gestures used in the maps and the
understanding of the overall building layout.

Introduction
Using a force-feedback device (PHANToM, by SensAble Inc.) to assist visually impaired
persons was one of the first ideas for its use. Fritz (1996) describes the possibilities of haptic
rendering and uses the device for helping the visually impaired. Kurze (1996; 1998) created a system
for drawing by the visually impaired. He also notes how a drawing of a 3D object made by a blind
person differs from drawings made by the sighted: visually impaired people tend to describe an object
as a set of simple shapes (circles, squares). We utilise this fact in a technique called show me the map.

Show me the map


The show me the map technique is similar to the situation in which a visually
impaired person is led by the hand of a guide through a building. We simulate this condition by
movements of the device. The user holds the end-effector of the force-feedback mechanism, and the
device exerts the forces necessary to move along a vector from A to B. During the "replay" of a
vector, the user is able to control the device via the keyboard, much as one controls a tape recorder:
replay can be stopped, paused, or reversed. The velocity of the vector replay is stored with the
vector itself; it can be set in a file, but it is not changed during the replay phase. The map replayed to the
user is a symbolic representation of the building layout. It consists of gestures connected to each other.
A gesture can represent either the geometry of an object (a corridor is represented as a straight
line) or processing information (the start gesture mimics hand shaking by moving up and down).
The technique was extended with levels of detail (LOD), implemented by creating
several maps of the same interior with different amounts of detail. At the full detail level, the geometry of the
rooms and corridors is shown exactly. At the middle detail level, each room is substituted by a
gesture (a circle) shown at the entrance to the room. The map at the lowest level of detail shows only
the entrances to the rooms (by a movement in the direction of a room and back to the corridor).

Figure 1. The map of a building shown as a set of gestures via the force-feedback device.
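As a purely illustrative sketch of how such a gesture map might be stored and replayed (the data structure, names and constants below are assumptions, not the authors' implementation), the following Python fragment replays a list of vectors at their stored velocities; reversing the list and negating the displacements would give the backward, tape-recorder-style replay.

import time
from dataclasses import dataclass

@dataclass
class Segment:
    """One replayed vector of the symbolic map: a displacement plus a velocity."""
    dx: float          # displacement in metres
    dy: float
    dz: float
    speed: float       # replay velocity stored with the vector (m/s)
    label: str         # e.g. "corridor", "room entrance", "stairs", "start"

def replay(segments, send_position, step=0.02):
    """Lead the user's hand along the map, one small step at a time.

    `send_position` stands in for the call that commands the haptic device
    to pull the end-effector towards a target position.
    """
    x = y = z = 0.0
    for seg in segments:
        seg_len = (seg.dx**2 + seg.dy**2 + seg.dz**2) ** 0.5
        steps = max(1, int(seg_len / step))
        for _ in range(steps):
            x += seg.dx / steps
            y += seg.dy / steps
            z += seg.dz / steps
            send_position(x, y, z)
            time.sleep(seg_len / steps / seg.speed)   # stored velocity sets the pace

# Lowest level of detail: a corridor with two room entrances indicated by
# short in-and-out movements, followed by stairs to the next floor.
floor_map = [
    Segment(0.0, 0.02, 0.0, 0.05, "start"),
    Segment(0.20, 0.0, 0.0, 0.05, "corridor"),
    Segment(0.0, 0.05, 0.0, 0.05, "room entrance"),
    Segment(0.0, -0.05, 0.0, 0.05, "back to corridor"),
    Segment(0.05, 0.0, 0.05, 0.03, "stairs"),
]
replay(floor_map, lambda x, y, z: print(f"target: {x:.3f} {y:.3f} {z:.3f}"))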
Experiment
For the experiment testing the technique, we created a building with two floors and six rooms
(see Figure 1). It was a synthetic building, not modeled on a real one. We created three
different maps of the same building with different levels of detail. Ten participants were
involved in the experiment (six males and four females). Before the experiment they filled in a
questionnaire about their skills and abilities. Their age varied from 17 to 22 (average 19.2). Nine of
them were congenitally blind and one had residual sight. They were skilled computer users, using
email and the web every day, and had little experience with virtual reality.

Results
During the experiment, the passes through the map were recorded. After each pass the participant
was asked about the number of floors in the building, the number of rooms on each floor and
their relative positions (on which side of the corridor they lay and how far they were from each other).
When the participant was able to describe the whole building correctly, the experiment ended. Most of
the participants were able to understand the building layout. They needed from 6 to 9 (average 8.2)
replays of the map to recognize the layout. One participant was not able to describe the layout of the
building at all. The map was replayed with different levels of detail, and at the end each participant
chose the level of detail he or she preferred. Two participants (20%) chose the full level of detail, two
participants (20%) chose the lowest level of detail, and four participants (40%) proposed a scenario with the
lowest level of detail for the first few passes and the full level of detail later. The rest (20%) were not able
to choose a preferred level of detail. Participants also commented on the sizes of the rooms shown on
the full-detail map. Only one participant (10%) was able to recognize the sizes correctly. Three
participants (30%) made one mistake, one participant (10%) made two mistakes, and five participants
(50%) were not able to describe the relative sizes of the rooms at all. The stairs gesture was immediately
recognized by all participants, and they all counted the number of floors correctly.

Conclusion
From these results we gain some insight into the usability of the levels of detail of a building
representation. The results show that the middle level of detail, with room
geometry substituted by a gesture, was not usable at all. Most participants liked to start the layout description
with the lowest level of detail and then switch to full detail. Playing gestures in both forward
and backward order is important: quite a few participants failed to notice the room which lies on the
opposite side of the corridor from the other three rooms on the first floor when the gesture map was played
in the forward direction. They were surprised when the whole layout was played in the reverse direction and
the PHANToM device moved in a direction where, by the participant's description, nothing should be.
Another possibility is describing the left part of a building (passing along the left wall) and then the right
part. We have not yet tested this approach with visually impaired participants. The issue arising from this
two-map approach is whether the mental maps of the two parts of the building will be well connected
together – especially in the case of two rooms on exactly opposite sides of the corridor.

References
Fritz, J. P. (1996). Haptic rendering techniques for scientific visualization. MSEE Thesis, University of Delaware.
Kurze, M. (1996). Tdraw: a computer-based tactile drawing tool for blind people. In Assets ’96: Proceedings of the
second annual ACM conference on Assistive technologies, pages 131–138, New York, NY, USA. ACM Press.
Kurze, M. (1998). Tguide: a guidance system for tactile image exploration. In Assets’98: Proceedings of the third
international ACM conference on Assistive technologies, pages 85–91, New York, NY, USA. ACM Press.
Williams, R. L., Srivastava, M., Conatser, R. R. J., & Howell, J. N. (2004). Implementation and evaluation of a haptic
playback system. Haptics-e: The Electronic Journal of Haptics Research, 3(3).
Using a scripting engine to facilitate the development
of force fields in a distributed haptic setup

Chris Raymaekers, Tom De Weyer & Karin Coninx


Expertise Centre for Digital Media and Transnationale Universiteit Limburg
Hasselt University, Belgium
chris.raymaekers@uhasselt.be

Enactive interfaces often make use of force-feedback devices. In order to use the
available processing power efficiently, haptic rendering can be offloaded to another computer. However, when
the force-feedback algorithms themselves must be extended, a distributed setup can be problematic,
as the server must be recompiled. In this paper, we elaborate on a system, built on top of VRPN,
that allows force fields to be scripted on the client side and executed on the server side without a
significant processing penalty.

Introduction
Force feedback is an important modality when developing an enactive application. However,
the haptic rendering of a virtual world can consume a large part of a computer's processing power.
Therefore, the haptic loop is often distributed to a second processor or even to a different computer.
In our research, we make use of the Virtual Reality Peripheral Network (VRPN) library (Taylor et
al., 2001) in order to distribute the haptic loop to another computer. This has the additional
advantage that all I/O devices can be addressed in a transparent, machine-independent manner.
In this paper, we briefly describe how VRPN can be used to distribute the haptic rendering of a
virtual world. Next, we elaborate on the difficulties that arise when extending force fields within
VRPN and present a solution to this problem. Finally, we give an example of a force field that was
realised with our setup, and we draw some conclusions.

Virtual Reality Peripheral Network


The VRPN library allows developers to make use of several I/O devices without having to
worry about network details or device drivers (Taylor et al., 2001). Each device that the developer
wants to use must be connected to a VRPN server, which has the right driver for this device. Clients
can connect to a VRPN server through one or more abstract device interfaces. For instance, a force-
feedback device has three abstract device interfaces: a Tracker interface to notify a
client of its position and orientation; a Button interface to notify a client when one of its buttons is
pressed or released; and a ForceFeedback interface through which clients indicate which forces are to
be sent to the device.
In its current state, VRPN can use the GHOST haptic library in order to use the PHANToM
as a haptic device. In order to calculate the forces, the client application must indicate which
objects are present in the virtual scene. Furthermore, the client application can also activate
different force fields.
In our research, we make use of different force fields in order to achieve better interaction
(Vanacken et al., 2006). We found, however, that this is difficult to realize through VRPN, as it
requires adding the new force field to VRPN and recompiling the server. As a VRPN server is often
used by several people, this is not only cumbersome but can sometimes be infeasible. We therefore
developed a new kind of force field which can be scripted. This force field only needs to be compiled once
and can be used to generate an arbitrary force field. The next section discusses this scriptable force
field.

function ThisField.forcefield()
    -- Read the current velocity of the PHANToM end-effector.
    x, y, z = ThisField:GetPhantomVel()
    -- Parameter 1 stores the X-velocity from the previous haptic-loop iteration.
    xp = ThisField:GetParameter(1)
    ThisField:SetParameter(1, x)
    multiplyfactor = 0.003
    if (xp * xp < x * x) then
        multiplyfactor = 0.001  -- accelerating in X: use less drag force
    end
    -- Drag effect: the force opposes the current velocity.
    x = x * -multiplyfactor
    y = y * -multiplyfactor
    z = z * -multiplyfactor
    ThisField:SetForce(x, y, z)
end

Figure 1. Scriptable force field architecture.
Figure 2. Example force field (Lua script).

Scriptable force field


In order to be able to make use of arbitrary force fields, we created a force field that uses the
Lua scripting engine (Ierusalimschy et al., 1996) on the server side to execute scripts
created on the client side.
Figure 1 shows how the scriptable force field is implemented on top of VRPN. First, a
developer must create the force field as a Lua script. Next, the client application uses VRPN's remote
force-feedback device interface in order to connect to the server. The script can then be sent to the
server by a client version of the scriptable force field, using the VRPN communication protocol.
This script is compiled on the server side by the Lua compiler, and the compiled script can be
executed within the haptic loop by the server version of the scriptable force field. As the Lua script
is compiled before being used in the haptic loop, its execution speed is comparable to that of the
compiled C++ code of VRPN.
The Lua scripting interface exposes the following functionality to force field scripts: the
force-feedback device's position, orientation and velocity can be interrogated, and the script can
send forces to the device. Furthermore, it is possible to save variables and to use them during a
future execution of the haptic loop. Figure 2 shows an example force field in Lua script. This force
field implements a drag effect, which is diminished when the pointer accelerates in the X-direction.

Conclusions
In this paper, we presented a system for developing and deploying force fields in a distributed
setup. We showed how Lua scripts can be efficiently used in combination with VRPN in order to
create force fields that can be executed on a different computer.

References
Ierusalimschy, R., de Figueiredo, L. H., & Celes, W. (1996). Lua-an extensible extension language. Software: Practice
& Experience, 26(6), 635-652.
Taylor, R. M., Hudson, T. C., Seeger, A., Weber, W., Juliano, J., & Helser, A. T. (2001). VRPN: A Device-
Independent, Network-Transparent VR Peripheral System. ACM Symposium on Virtual Reality Software &
Technology 2001, pp. 55-61. November 15-17, Banff, Canada.
Vanacken, L., Raymaekers, C., & Coninx, K. (2006). Evaluating the Influence of Multimodal Feedback on Egocentric
Selection Metaphors in Virtual Environments. First International Workshop on Haptic Audio Interaction Design.
August 31 – September 1, Glasgow, United Kingdom.

Acknowledgements
Part of the research at EDM is funded by ERDF (European Regional Development Fund), the
Flemish Government and the Flemish Interdisciplinary institute for Broadband technology (IBBT).
The Network of Excellence “Enactive Interfaces” (FP6-IST 002114) is funded by the European IST
objective.
The theatrical work of art: a collection of man-machine interactions?
Francis Rousseaux
IRCAM, France
CRéSTIC, URCA University, France
francis.rousseaux@ircam.fr

How should we consider the role of information technology in art, especially in the fields of
performance such as music, theatre, and opera? We deliberately use the common definition of
information technology: a tool that facilitates existing practices, or what one would usually call the
automation of these activities. We will consider these approaches by questioning the idea of creativity
in a man-machine context: the creativity of the machine, the creativity of the user who has new
artistic tools available to him through the computer, and creativity in the domain of form through a
man-machine dialogue. We have identified three categories of application, which we have named so as
to make them easily identifiable:
- "When art brings information technology into play." This category brings together
productions and research in which computer algorithms are transcribed into artistic media. Above all,
the generative functions of the computer are put to use in this case. In the interactive production of
Bellini's Norma (2001), since we were forced to constantly find different solutions to the system of
inequations that defines the conventions of the stage design, we employed a constraint solver. These solutions
were then brought to the stage through the appearance of characters on a large screen-mirror. In this way,
the computer's own functioning challenges the idea of creativity by allowing the computer to make
"suggestions" that we can then consider as having more or less artistic merit.
- "When computer systems question artistic practices." This second approach questions the
status of the user of artistic computer systems, where the tools used by the artist are made easily available,
either in part or in whole. Certain software programs and CD-ROMs question the notion of musical
composition, and when using them one sometimes wonders who the composer actually is. For
example, the CD-ROM Prisma, dedicated to the composer Kaija Saariaho, imagines composition
as an interactive game: the work, written initially for flute and cello, was cut into small fragments
that can be re-organized by the user through simple steps, although the computer prohibits certain
combinations.
- "When theatrical works are made up of man-machine interactions." The third approach
addresses the issue of the creation of a form through a man-machine dialogue. It plays a part in
performances based on actor/singer-computer relationships, and for that it relies on broadening
the design or production parameters. In the production of La traversée de la nuit,
performed in 2003 at the Centre des Arts d'Enghien-les-Bains, the man-machine dialogue was a
central element of the play's interactive stage design. The actress' voice was analyzed by a neural
network coupled to a multi-agent image-generating system that, in turn, affected the actress'
performance. This created the performance's design, which was like a collection of interactions.
This type of performance goes, a priori, beyond stage design ontologies.

This last approach brings the rich, but problematic, notion of the collection to the forefront. A
collection is different from a list, an assembly, a class, a series, a stack, a group, and a mass. It is
distinguished, by its figure of reception, from clutter and jumble, and also from everything that is organic,
from lineage and family, from the troop and the procession. It cannot present itself as a coherent whole,
apart from its simplistic management system. What are the consequences of the implementation of
this kind of system? How can one give way to the schizophrenic implementation and reception of
a system of interactive contents?
We will demonstrate the systems that we have created and presented to the public in the
domain of man-machine theatrical performances.
The final objective is to promote the collection category and to reinvest an original
epistemology of enaction.
Haptic interfaces for collaborative learning among sighted
and visually impaired pupils in primary school

Eva-Lotta Sallnäs, Jonas Mol & Kerstin Severinson Eklundh


Human-Computer Interaction Group, School of Computer Science and Communication
Royal Institute of Technology, Sweden
kse@nada.kth.se

Introduction
In the study presented here, two software prototypes for learning about geometrical concepts in
primary school were evaluated. Both prototypes have a combined haptic and visual user interface,
so as to support visually impaired pupils in group work together with sighted classmates. The aim
of the prototypes was to facilitate the visually impaired pupils' inclusion in school. The overall goal
of the study was to evaluate collaboration in a shared haptic environment regarding usability,
interaction, learning and inclusion in a group work process with visually impaired and sighted
pupils. The prototypes and the evaluation are described in more detail in Moll (2006).

Haptic interfaces for collaboration and learning


Haptic interfaces, where software output is mechanically sensed rather than viewed on a
screen, are especially important when it comes to visually impaired users. Sjöström (2002) presents
important guidelines for the design of haptic interfaces and shows how haptic interfaces support blind
students in learning mathematics. As far as we know, there has only been one previous study involving blind
and sighted users in collaboration (Sallnäs et al., 2006). In that study three haptic prototypes were
evaluated with pairs consisting of one blind and one sighted user.

The prototypes
In both prototypes a haptic feedback device, the PHANToM, is the means for user interaction.
Either prototype may be used by two pupils, each with one PHANToM, in collaboration.
The first prototype developed is a three-dimensional environment that supports collaborative
learning of spatial geometry. The scene is a ceiling-less room with a box on the floor containing
geometrical objects, such as a sphere, cube, cone and cylinder, which can be picked up and moved
around by means of the PHANToM. In this prototype the pupils can feel and recognize the different
geometrical shapes, and they can also use the objects from the box to build composite objects. Apart
from feeling and manipulating objects, users can feel and grasp each other's graphical
representations to provide navigational guidance to a visually impaired partner. Figure 1 shows a
screenshot from this prototype, the room being viewed from above with the box in the far left
corner.
In the other, two-dimensional, prototype, the scene is a board on which one can feel the angles
and geometrical shapes of drawn figures. Figure 2 shows the idea, where the pencil-like objects are
the graphical representations of the users. The figures are created and manipulated by the teacher in
another prototype, not described here.

Figure 1. The 3D environment.
Figure 2. The 2D environment.
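The paper does not detail how the grasping-based guidance is rendered haptically; purely as an illustration, one common way to realize such guidance is a spring-damper coupling that pulls the guided pupil's proxy towards the guiding pupil's proxy, sketched below in Python with assumed stiffness, damping and force-limit values.

import numpy as np

def guidance_force(guide_pos, follower_pos, follower_vel,
                   stiffness=150.0, damping=5.0, max_force=3.0):
    """Spring-damper pull of the follower's proxy towards the guide's proxy.

    Positions are 3D vectors in metres, velocity in m/s; the returned force
    (in newtons) is clamped so the haptic device stays within a safe range.
    """
    guide_pos = np.asarray(guide_pos, dtype=float)
    follower_pos = np.asarray(follower_pos, dtype=float)
    follower_vel = np.asarray(follower_vel, dtype=float)
    force = stiffness * (guide_pos - follower_pos) - damping * follower_vel
    norm = np.linalg.norm(force)
    if norm > max_force:
        force *= max_force / norm
    return force

# The guiding pupil's proxy is 2 cm to the right of the guided pupil's proxy;
# the force gently pulls the latter towards the former.
print(guidance_force([0.02, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))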


The evaluation
The participants in the evaluation were selected from four different primary schools in the
vicinity of Stockholm. In total, twelve pupils participated in the evaluation, in four groups of three pupils
with one visually impaired and two sighted pupils (one of whom was engaged at a time) in each. The
pupils were all 11-12 years old. The evaluation was divided into five parts, with two training
sessions, two group task sessions and finally one interview session with all three pupils together
after they had performed the tasks. The main task in the two-dimensional prototype was to
categorize the angles of a polygon (see Figure 2). The main task in the other prototype was to cover
a certain rectangular area by combining cubes (Figure 1).

Major results and Conclusions


One conclusion of the evaluation is that if objects can be moved around in a collaborative
environment, the collaboration becomes more complex for the visually impaired pupils. Their task
then involves coordinating moving objects rather than solving the math problem, and some additional
support is needed for the visually impaired pupil to be aware of changes. The combination of navigation
and object handling does not, however, seem to be a problem in either the "Angle" or the "Cube-
moving" environment.
The Angle prototype supported collaboration very well, whereas the Cube-moving prototype
turned out to be more problematic, yet with great potential. In the latter, the pupils used the haptic
navigational guidance function and two groups discussed quite a lot during the work process. Most
of the groups, however, had problems coordinating their work because the visually impaired pupil
could not keep track of all the changes in the environment. In two of the groups the visually
impaired participant also accidentally destroyed what the group had built. The results show that the
characteristics of this type of learning environment affect the dynamics of the group work
significantly. The technology may easily become an obstacle instead of a supporting tool.
The results make it clear that this type of haptic and dynamic interface supports social
learning. Social learning is "learning through dialogue", and in most of the groups the participants
were highly engaged in discussions regarding the categorization of figures and problem-solving
strategies. Since constructive learning is "learning by actively manipulating or constructing
knowledge of the world" (Säljö, 2000), it is clear that the Cube-moving prototype supports this kind
of learning. According to this definition, it seems that the touch modality is a key means for constructive
learning – the blind user actively constructs his/her understanding of a geometrical concept by
moving the haptic device around. So the Angle prototype supports this kind of learning as well.
Finally, the results make it clear that a shared work process can be obtained in these types of
interfaces, through which visually impaired pupils, by working as a team with their sighted fellows, can
form an understanding of the workspace layout. It is possible to discuss a math problem involving
geometrical concepts together. The shared interface makes possible a problem-solving process
in which everyone is included. However, the design of the multimodal environment affects the level of
inclusion in a powerful yet subtle way, which makes it very important to explore further how to
design a truly inclusive environment.

References
Moll, J. (2006). Utveckling och utvärdering av programvara till stöd för lärande under samarbete mellan seende och
synskadade elever i grundskolan. [In Swedish]. Master thesis, Royal Institute of Technology, Stockholm.
Säljö, R. (2000). Lärande i praktiken – ett sociokulturellt perspektiv. Stockholm: Prisma.
Sallnäs, E.-L., Bjerstedt-Blom, K., Winberg, F., & Severinson Eklundh, K. (2006). Navigation and control in haptic
applications shared by blind and sighted users. Workshop on Haptic and Audio Interaction Design, University of
Glasgow, 31st August – 1st September 2006.
Sjöström, C. (2002). Non-visual haptic interaction design – Guidelines and applications. Doctoral thesis, Certec, Lunds
Tekniska Högskola.
Multimodal Interfaces for rehabilitation: applications and trend
Elisabetta Sani, Carlo Alberto Avizzano, Antonio Frisoli & Massimo Bergamasco
Perceptual Robotics Laboratory, Scuola Superiore Sant’Anna, Italy
e.sani@sssup.it

This presentation provides analyses and insights concerning the sector of multimodal interfaces
for applications in medical rehabilitation. Particular attention is given to the research developed
by the Enactive Network, with a special focus on interfaces for teaching and learning manual
tasks and for special users and uses. What are the reference context and the trends followed by the
sector? Who leads the market? What are the main difficulties faced by the institutions operating
in this field, and how are they addressed?

Introduction
Bringing research results out of the research institution and into an industrial context is
always a challenge that should be pursued by every laboratory (Mowery, 2001). When the process
eventually leads to success on the market, it is usually called innovation (Siegel & Van
Pottelsberghe de la Potterie, 2003).
Innovation is the introduction of new processes and new ways of doing things, revolutionising
how things have been accomplished previously. Innovation is not just about improvement; it is
much more than that (Christensen, 2004). Improvement is taking what has already been done and
making it better, whereas innovation is about discovering new methods and making changes.
When innovation, and the related technology, can improve the condition of human beings, it
assumes a fundamental role which cannot be ignored. This is especially the case for the medical
multimodal interfaces used in rehabilitation. Rehabilitation technology services represent the
systematic application of technologies, engineering methodologies, or scientific principles to meet
the needs of, and address the barriers confronted by, individuals with disabilities in areas which
include education, rehabilitation, employment, transportation, independent living, and recreation.
The term includes rehabilitation engineering, assistive technology devices, and assistive technology
services (Clarkson, Dowland & Cipolla, 1997).
In such cases, it is necessary to better define the context in order to identify potential uses and
applications, also by analysing the economic activity induced by the related sector.
Every year, thousands of people suffer permanently disabling, but nonfatal, injuries to the
brain or spinal column. Many victims are young, just beginning their lives, and have much to offer
society. Research institutions that are involved in the development of medical devices for
rehabilitation, and want to have them used properly, recognize the importance of such rehabilitation
for the lives of people with disabilities.
The European Commission currently supports a variety of projects related to medical devices
for rehabilitation in its framework programme, which is a strong indicator of the importance of the
dissemination of these technologies. Particular attention to this theme is also given by the
research carried out within the Enactive Network, in specific activities dealing with teaching and
learning manual tasks. Enactive knowledge is not simply multisensory mediated knowledge, but
rather knowledge stored in the form of motor responses and acquired by the act of "doing". A
typical example of enactive knowledge is the competence required by tasks such as
driving a car, walking, dancing or playing a musical instrument, which would be difficult to describe
in an iconic or symbolic form (Bergamasco, 2005). This type of knowledge transmission can be
considered the most direct, in the sense that it is natural and intuitive, since it is based on
experience and on the perceptual responses to motor acts.
The technology generated within the framework of 'learning and teaching manual tasks' and
'special users and uses' can be considered a potential input to study in detail, in view of the
ongoing development of the use of multimodal interfaces for rehabilitation.
Description of the presentation
The presentation analyses the context and the trends of these devices by taking into account the
main factors that characterize the targeted sector. The goals of the presentation are to advance the
understanding of the medical rehabilitation device market in order to improve the conditions for
introducing possible new services or products, or for boosting existing ones, thereby enhancing health.
Moreover, it would be interesting to discuss the aspects of potential applications in order to
judge the likelihood that the proposed research analysis will have a substantial impact on the pursuit
of specific goals. The main targets are the customers and technologies that are going to introduce or
implement rehabilitation devices. The needs of users are of interest only in so far as the customer
understands them. This statement might look a little paradoxical, but it reflects the sometimes very
different motivations and scopes of work of customers and users.
This presentation therefore analyses the trend of multimodal interfaces for rehabilitation that
meet user needs, in order to identify the market niches already addressed and those still to be addressed.
The report is articulated in a brief introduction, followed by a detailed description of the background, which
analyses the reference context of rehabilitation devices. Data obtained from publication and patent
databases support the analysis, helping to understand the interest and the evolution which characterize
the sector. Fundamental factors related to rehabilitation medicine devices, such as cost, the
speed-up of processes, quality improvement, usability and uniqueness, are monitored.

References
Bergamasco, M. (2005). Tutorial on Multimodal interfaces: ENACTIVE systems. Eurographics 2005, 29 august- 2
september, Trinity College Dublin, Dublin, Ireland.
Christensen, C. M. (2004). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail.
(Management of Innovation and Change Series). Harvard Business School Press.
Clarkson, J., Dowland, B., & Cipolla, R. (1997). The use of prototypes in the design of interactive machines.
International Conference on engineering design (ICED 97), 2/735-2/740, 19-21 august, Tampere, Finland.
Siegel, D. S., & Van Pottelsberghe de la Potterie, B. (2003). Symposium Overview: Economic and Managerial
Implications of University Technology Transfer (Selected Papers on University Technology Transfer from the
Applied Econometrics Association Conference on 'Innovations and Intellectual Property: Economic and
Managerial Perspectives'). The Journal of Technology Transfer, 28(1), 5-8.
Mowery, D. C. (2001). The Changing Role of University in the 21st Century U.S R&D System. The 26th Annual
Colloquium on Science and Technology Policy, May 3-4, Washington, DC.
Dynamical and perceptual constraints on interpersonal coordination

Richard Schmidt
Psychology Department, College of the Holy Cross, USA
rschmidt@holycross.edu

Enactive interpersonal interactions are different from interactions with other environmental
objects in at least two respects. First, the interactions are characterized by goals that are dyadically
rather than individually defined. Second, these environmental interactions have a dynamic circular
causality; that is, actions of one organism lead to changes in the actions of the other and vice versa.
Whereas the coordination that occurs between two people when they are dancing or moving
furniture together is obvious, intentional, and strongly task constrained, research has also found
unintentional movement coordination in more psychological interactions such as conversations
(Davis, 1982). This talk will review some of the processes involved in these interpersonal enactive
interactions.

The influence of dynamical processes of self-organization


Past research has revealed that general dynamical entrainment principles underlie both
intentional and unintentional interpersonal coordination when visual information about each
person’s movements is available as a coupling medium. A series of interpersonal coordination
studies in which two individuals were required to intentionally coordinate rhythmic movements
(e.g., Schmidt et al., 1998) found that the coordination dynamic used to model rhythmic interlimb
coordination of a single person (Kelso, 1995) could be used to model the interpersonal coordination
of limbs. This research has shown that two patterns of coordination, inphase and antiphase (0° and
180° relative phase) can easily be maintained through visual coordination although antiphase is less
stable and may break down at higher frequencies (Schmidt et al., 1990). If the difference in the
inherent frequencies of the two effectors is increased (e.g., by making the hand-held pendulums
more different in their lengths), the relative phasing of the movements becomes more variable and
exhibits a proportional lagging of the inherently slower (larger) effector (Schmidt et al., 1998).
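The coordination dynamic referred to above is commonly modelled with the (extended) Haken-Kelso-Bunz relative phase equation, dphi/dt = delta_omega - a sin(phi) - 2b sin(2phi). The short Python sketch below (parameter values are illustrative only) integrates this equation and shows that antiphase is an attractor only while the coupling ratio b/a is large enough, mirroring its loss of stability at higher movement frequencies.

import numpy as np

def relative_phase_trajectory(phi0, delta_omega=0.0, a=1.0, b=0.4,
                              dt=0.01, steps=4000):
    """Integrate the extended HKB equation for relative phase (radians).

    d(phi)/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi)
    Inphase (0) and antiphase (pi) are fixed points; antiphase loses
    stability when b/a becomes small (i.e. at higher movement frequencies).
    """
    phi = phi0
    trajectory = [phi]
    for _ in range(steps):
        dphi = delta_omega - a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi)
        phi += dphi * dt
        trajectory.append(phi)
    return np.array(trajectory)

# Started near antiphase, the phase settles at pi while b is large enough...
print(np.degrees(relative_phase_trajectory(np.pi - 0.3, b=0.4)[-1]))   # ~180 deg
# ...but collapses to inphase (0 deg) once the antiphase attractor vanishes.
print(np.degrees(relative_phase_trajectory(np.pi - 0.3, b=0.1)[-1]))   # ~0 deg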
Other research examined whether this entrainment dynamic would produce unintentional
coordination and operate outside the participants' awareness, as in naturalistic interpersonal
synchrony. Richardson et al. (2005) required a pair of participants to discriminate together the
differences in the features of two similar cartoon faces while simultaneously swinging hand-held
pendulums as a 'distracter' task. The cartoon faces were displayed either on the ends of their
partner's moving pendulum or on stands positioned parallel to the outer side of each chair. The
movements of the participants became entrained (higher cross-spectral coherence and a greater
proportion of inphase and antiphase relative phase angles) when visual information about their partner's
movements was available. Consequently, the results of these studies support the claims that both
process and that a visual informational medium can sustain such a self-organizing mode.

The influence of perceptual pick-up rhythms


Quite apparent during the interpersonal coordination studies just described is a participant’s
on-going visual tracking of their partner’s movements. The idea that perceptual systems are active,
exploratory and intrinsically related to action coordinated with the environment has received
significant attention (Gibson, 1962; Yarbus, 1967). However, in spite of this research, we know
little about the effects that the observable motor activity of the perceptual system has on assembling
on-going actions coordinated with the environment. The question is whether the ocular activity
apparent in interpersonal coordination needs to be conceptualized as part of the dynamic
synchronization process that emerges in the experiment. To understand this, Schmidt et al. (in
press) investigated whether rhythmic visual tracking of a stimulus increases the unintentional
entrainment of a person’s movements to that stimulus. Two experiments were conducted in which
participants swung a hand-held pendulum while tracking an oscillating stimulus or while keeping
their eyes fixed on a stationary location directly above an oscillating stimulus (Figure 1).

Figure 1. Environmental coordination paradigm used for studying visual tracking (oscillating stimulus viewed from a chin rest).
Figure 2. Effect of visual tracking on unintentional interpersonal coordination (mean coherence for the tracking, non-tracking, and control conditions).

Results demonstrated that the participants became more unintentionally entrained to the
stimulus in the tracking condition than in the non-tracking condition but still exhibited some phase
entrainment near 0° and 180° for the non-tracking condition. These findings suggest that visual
entrainment is facilitated when an individual’s visuo-motor system is actively engaged in detecting
the relevant visual information.
A follow-up study investigated whether this result, obtained with an environmental entrainment
paradigm, generalizes to interpersonal rhythmic coordination by using the dyadic problem-solving
paradigm of Richardson et al. (2005) but with the addition of a non-tracking condition. In the visual
tracking condition, the cartoon pictures were displayed on a stand attached to the end of the
pendulums, such that each participant had to track their co-actor’s movements in order
to complete the dyadic task. In the non-tracking condition, the pictures were displayed on a floor-
stand positioned directly behind the motion of their co-actor’s pendulum. Results (Figure 2) indicate
that the movements of the participants become more strongly entrained when visual tracking of
their partner’s movements is required to complete the dyadic puzzle task.
The results of both these studies suggest that the dynamical, interpersonal ‘coordinative
structure’ we have been studying over the past two decades needs to be conceptualized not just as
the limbs of two people coupled via a passive visual coupling but as the limbs of
two people coupled via an active, visual information pickup dynamic which is intrapersonally
coupled to the individuals’ limbs.

References
Davis, M. E. (1982). Interaction rhythms. New York, Human Science Press.
Gibson, J. J. (1962). Observations on active touch. Psychological Review, 69, 477-491.
Kelso, J. A. S. (1995). Dynamic patterns. Cambridge, MA, MIT Press.
Richardson, M. J., Marsh, K. L., & Schmidt, R. C. (2005). Effects of visual and verbal information on unintentional
interpersonal coordination. Journal of Experimental Psychology:Human Perception and Performance, 31, 62-79.
Schmidt, R. C., Bienvenu, M., Fitzpatrick, P. A., & Amazeen, P. G. (1998). A comparison of intra- and interpersonal
interlimb coordination: Coordination breakdowns and coupling. Journal of Experimental Psychology: Human
Perception and Performance, 24, 884-900.
Schmidt, R. C., Carello, C., & Turvey, M. T. (1990). Phase transitions and critical fluctuations in the visual
coordination of rhythmic movements between people. Journal of Experimental Psychology: Human Perception
and Performance, 16, 227-247.
Schmidt, R. C., Richardson, M. J., Arsenault, C. A., & Galantucci, B. (in press). Visual tracking and entrainment to an
environmental rhythm. Journal of Experimental Psychology: Human Perception and Performance.
Yarbus, A. L. (1967). Eye movements and vision. New York, Plenum Press.
Auditory navigation interface featured by acoustic sensitivity common to blind
and sighted people

Takayuki Shiose1, Shigueo Nomura1, Kiyohide Ito2, Kazuhiko Mamada3,


Hiroshi Kawakami1 & Osamu Katai1
1 Graduate School of Informatics, Kyoto University, Japan
2 Future University-Hakodate, Japan
3 School for the Visually Impaired, University of Tsukuba, Japan
shiose@i.kyoto-u.ac.jp

Introduction
This paper describes which auditory information affects the accuracy of “perception of
crossability” for both blind and sighted people. “Crossability” is the sense of perceiving the
capability that a person can cross a road without being hit by a vehicle. In our previous work
(Shiose et al., 2005), we demonstrated that it is possible to clearly distinguish subjects who can
understand road situations accurately and those who cannot by focusing on individual variations of
the acoustic clues they use. We have created a “virtual 3D acoustic environment” to help
pedestrians with severe visual impairment train to cross roads safely, in which they can hear moving
sound sources as if vehicles were passing in front of them. The system is theoretically based on
acoustic “time-to-contact” information, which is the most important concept in Ecological
Psychology. Those results led us to focus on accurate perception of speed as a basis for accurate
understanding of the traffic situation. Additionally, the sensitivity to different speed conditions
common to blind and sighted people suggested a new acoustic navigation interface that
enables users to perceive environmental information intuitively.

Experimental results and discussions


Our system was composed of three audio recorders, two 3D sound-space processors and a
headphone system. The main components of the system, the virtual 3D sound-space processors, can
simulate various acoustic conditions including reflected sounds, reverberations, and the Doppler
Effect. Figure 1 shows the experimental setting. First, each subject was instructed to localize where
the white cane made knocking noises, after which the sound source was set to pass about 2 m in
front of the subject at a certain speed, which was randomly changed from 10 to 40 km/h. The
subjects were instructed to listen to the acoustic information and to click twice, clicking first when
they noticed the car sound passing over the white cane knocking point, and second when they heard
the sound of the passing car right in front of them. Note that sighted subjects were instructed not to
watch the screen.

Figure 1. Accuracy of locating the moving sound source vs. moving speed: time differential [s] between the two clicks as a function of car speed [km/h], shown for all subjects, blind subjects, sighted subjects, and the correct answer.

The subjects were instructed to locate the sound source moving at different speeds (from 10
km/h to 40 km/h at 5-km/h intervals, presented randomly) in 40 trials. We examined whether
subjects can accurately perceive speed by using only acoustic information. Thirteen sighted subjects
and thirteen blind subjects, aged between 21 and 50 years old, participated in the experiment. We
focused on how correctly subjects estimated the 8-m distance from a point where knocking noises
were made to the front of a subject.
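For reference, if the correct time differential is taken to be simply the 8-m interval divided by the car speed (an assumption consistent with the "correct answer" line in Figure 1), the expected values range from roughly 2.9 s at 10 km/h down to about 0.7 s at 40 km/h, as the short Python sketch below shows.

# Expected time differential for the 8-m interval at each presented speed
for v_kmh in range(10, 45, 5):
    v_ms = v_kmh / 3.6              # convert km/h to m/s
    print(f"{v_kmh:2d} km/h -> {8.0 / v_ms:.2f} s")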

Figure 1 shows the time differences between two clicks under different car speed conditions.
The correct answer varies in inverse proportion to the car speed as described by the red line. We
used a two-way ANOVA to compare means among groups. Post hoc analyses were performed with
the Bonferroni/Dunn procedure when the F ratio was significant at p < 0.05. The main effect of the
factor ‘Speed’ was significant (p < 0.01). On the other hand, the main effect of the factor “Visual
Impairment” was not significant (p=0.1058). The Bonferroni/Dunn test revealed the trend that there
were significant differences among different speed conditions at low speed (around 10km/h) and no
significant difference at high speed (around 40 km/h). This indicates that both blind and sighted
people tend to underestimate how long a car takes to arrive at the spot, and to a greater extent the
faster it moves. There was no significant interaction effect (p = 0.0996), although for blind people
only, the differences between estimated times for different car speed conditions became smaller as
the car moved faster.

Proposition of an auditory interface


Based on these results, we can propose a new Enactive Interface for driving a car, which
reduces a driver's load of visual cognition and assists visually impaired people in driving. It consists
of ultrasonic sensors on a vehicle and the virtual 3D acoustic environment system. Sensors measure
distances between the vehicle and walls in all directions and report them to the system. The system
then creates virtual sound sources at the positions of the walls, making the driver feel as if the walls
themselves were emitting sound. The driver can estimate the positions of the walls by locating these
sound sources. We carried out a preliminary experiment on this interface, in which subjects were
instructed to report changes in the road environment only by listening to the virtual 3D sound at the
position of a vehicle moving at 4 km/h. Subjects often predicted a change of road width, with a
prediction time of between 0.5 and 2.9 seconds. Subjects predicted the change in 90% of the cases in
which the vehicle went from a wide road into a narrow one, and in 30% of the opposite cases.
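As a rough sketch of the mapping just described (the sensor layout, coordinate conventions and function names are assumptions for illustration, not specifications of the implemented system), each ultrasonic range reading could be turned into a virtual sound-source position around the vehicle:

import math

def wall_sources(scan, max_range=10.0):
    # scan: mapping from sensor bearing (degrees, 0 = straight ahead, positive = left)
    # to measured distance (m); readings beyond max_range are treated as "no wall"
    sources = []
    for bearing_deg, distance in scan.items():
        if distance < max_range:
            a = math.radians(bearing_deg)
            sources.append((distance * math.cos(a),   # x: forward
                            distance * math.sin(a)))  # y: left
    return sources

# Example: a wall 1.5 m to the left and 2.0 m to the right, nothing detected ahead
print(wall_sources({0: 12.0, 90: 1.5, -90: 2.0}))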

(Block diagram components: PC, audio recorders, mixer, sound space processors, and power amplifiers, connected by audio, control, and MIDI lines.)
Figure 2. Overview of a new enactive interface for driving a vehicle with ultrasonic sensors

References
Shiose, T., Ito, K., & Mamada, K. (2005). Virtual 3D Acoustic Training System Focusing on Differences among
Individual Clues of Crossing Roads. Proceedings of the 2nd International Conference on Enactive Interfaces,
Genova, Italy, November 2005.
Stabilization of posture relative to audio referents
Giovanna Varni1, Thomas Stoffregen2, Antonio Camurri1,
Barbara Mazzarino1 & Gualtiero Volpe1
1 InfoMus Lab DIST, Università di Genova, Italy
2 Human Factors Research Laboratory, University of Minnesota, USA
giovanna@infomus.dist.unige.it

Introduction
Very little research has investigated the use of auditory patterns in the control of action. We
studied the ability of standing persons to control their medio-lateral orientation relative to dynamic
patterns in acoustic stimulation. We asked standing subjects to align their bodies with different
orientations in acoustic space. Body sway altered acoustic stimulation, providing dynamic
information about orientation relative to acoustic space. Dynamic acoustic patterns can influence
quiet stance (e.g., Chiari et al., 2005), and subjects can use body movements to track moving
acoustic stimuli (Kim et al., 2005). We varied the location of acoustic target orientations
independent of subjects’ orientation relative to kinaesthetic referents, that is, we decoupled acoustic
“upright” from gravito-inertial upright. At the same time, subjects needed to control their
orientation relative to kinaesthetic referents (the direction of balance) to avoid falling. Thus,
subjects were required simultaneously to control their orientation relative to two independent
referents.

Methods
Apparatus and stimulus generation
Subjects stood on a custom-built mechanical platform, and listened (via headphones) to
acoustic patterns generated by an acoustic synthesis engine. The platform consisted of 1) a
springboard (two parallel plywood platforms 610 mm x 320 mm x 22 mm connected by a pair of
wheels, allowing the superior platform to rotate +/- 13° in the medio-lateral plane), that could rotate
(under the feet) in the roll axis 2) compression springs that prevented “hard” contacts with the solid
substrate, and 3) a sensor that detected the angular position of the springboard (an optical encoder
sampled at 50 Hz).
Subjects were provided with real-time feedback about their dynamic orientation relative to
target orientations that were defined in auditory space. The acoustic signal consisted of a random
noise that was amplitude modulated using a pair of sinusoidal waves whose frequency changed
linearly from 1 Hz to 10 Hz and from 0.5 to 5 Hz respectively, depending on the angular position of
the springboard. The resulting pulsing audio signal was sent to left and right audio channels of
headphones. We mapped the loudness of these audio channels to the opposite direction with respect
to instantaneous platform position, so that when the platform was to the right of the target
orientation, audio feedback was provided to the left ear, and vice versa. The auditory feedback
synthesis engine was implemented by means of the open hardware and software platform EyesWeb
(Camurri et al., 2005).
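The following Python sketch approximates the feedback mapping described above, for illustration only; the actual synthesis ran on EyesWeb, and the exact relation between angular error and pulse rate or loudness was not fully specified, so the linear scaling, block duration and sample rate used here are assumptions.

import numpy as np

FS = 44100  # assumed audio sample rate (Hz)

def feedback_block(angle_deg, target_deg, dead_zone=1.2, max_err=13.0,
                   dur=0.02, rng=np.random.default_rng(0)):
    # Returns one (left, right) block of pulsing, amplitude-modulated noise.
    err = angle_deg - target_deg
    t = np.arange(int(dur * FS)) / FS
    if abs(err) <= dead_zone:                       # silent region around the target
        silence = np.zeros(t.size)
        return silence, silence
    mag = min(abs(err) / max_err, 1.0)
    f1 = 1.0 + 9.0 * mag                            # 1-10 Hz modulation, as in the paper
    f2 = 0.5 + 4.5 * mag                            # 0.5-5 Hz modulation
    pulse = 0.25 * (1 + np.sin(2 * np.pi * f1 * t)) * (1 + np.sin(2 * np.pi * f2 * t))
    signal = rng.uniform(-1.0, 1.0, t.size) * pulse * mag
    zeros = np.zeros(t.size)
    # Error to the right of the target feeds the LEFT ear, and vice versa
    return (signal, zeros) if err > 0 else (zeros, signal)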

Subjects
Ten healthy subjects (5 males, 5 females) participated; mean age was 27.1 years, mean height
170.4 cm, and mean weight 61.6 kg. Subjects had no balance deficits or hearing disorders.

Procedure
Subjects were asked to control their stance so as to minimize auditory feedback. On each trial,
the “silent point” of the left-right audio feedback function was set at a single position relative to the
direction of balance. There were seven silent points or target orientations: 0°, 3° right/left, 6.5°
right/left, 9° right/left with respect to the gravity direction. The silent point was actually a region +/-
1.2° from each target angle; within this region there was no audio feedback. There were three 60 s
trials for each orientation (total = 21 trials). Conditions were presented in random order. Subjects
began with a training session, during which they were informed of the target orientation before each
training trial. In the test phase, subjects were not told about target orientations in effect for each
trial.

Results
Subjects successfully controlled their medio-lateral orientation so as to maintain positions
within the 95% confidence interval for each target orientation. Achieved mean orientations did not
change significantly across trials (the three repetitions of each condition), indicating that any
learning was confined to the training session. For example, when the target orientation was 9° to the
left of the direction of balance, the mean achieved orientations were 8.56°, 8.71°, and 8.37° for the
first, second, and third repetition, respectively.
An ANOVA on mean achieved positions for each target orientation revealed a significant
main effect of Target Orientation, F(6,189) = 2616.34, p < .01 (Figure 1). Trials, and the Trials ×
Target Orientation interaction were not significant. Figure 1 makes clear that subjects could control
the orientation of the platform relative to the acoustic feedback while simultaneously maintaining
upright stance (relative to the gravito-inertial environment).

Figure 1. Mean (and standard error) platform positions (y-axis) as a function of target
orientations (x-axis).

Discussion
With minimal practice, subjects were able to orient their bodies relative to referents in
acoustic space, even when these referents diverged from the direction of balance. The results
suggest that healthy individuals can use acoustic stimuli to perceive and control bodily orientation
(Chiari et al., 2005; Kim et al., 2005). Having confirmed that acoustic stimuli can be used as an
independent source of information for the perception and control of orientation, we can now
proceed to investigate whether acoustic stimuli may be used to supplement non-acoustic
information. As one example, our system might be used to train blind persons to acquire sensitivity
to acoustic information as a substitute for visual information in the control of stance.

References
Camurri, C., De Poli, G., Leman, M., & Volpe, G. (2005). Communicating Expressiveness and Affect in Multimodal
Interactive Systems. IEEE Multimedia, 11, 43-53.
Chiari, L., Dozza, M., Cappello, A., Horak, F. B., Macellari, V., & Giansanti, D. (2005). Audio-Biofeedback for
Balance Improvement: An Accelerometry-Based System. IEEE Transactions on Biomedical Engineering, 52, 12,
2108-2111.
Kim, C., Stoffregen, T. A., Ito, K., & Bardy, B. G. (2005). Coupling of head and body movements to acoustic flow in
sighted adults. Poster presented at the XIII International Conference on Perception and Action, Monterey, CA.
Checking the two-third power law for shapes explored
via a sensory substitution device

Mounia Ziat, Olivier Gapenne, John Stewart, Charles Lenay,


Mehdi El Yacoubi & Mohamed Ould Mohamed
COSTECH, Department of Technology and Human Sciences,
Technological University of Compiègne, France
mounia.ziat@utc.fr

In this study, we present the first results concerning the validity of the 2/3 power law for
shapes explored by Tactos, a sensory substitution device.

Introduction
A fundamental law in motor control is the 2/3 power relation between movement speed and
shape curvature, given by the equation V(t) = K R(t)^β or, equivalently, A(t) = K c(t)^(2/3), where
V(t) is the tangential speed, R(t) the radius of curvature, A(t) the angular speed, c(t) the curvature, K
a constant and β a coefficient equal to 1/3 (in the second form of the equation the coefficient is 2/3,
hence the name 2/3 power law). The first experiments (Binet & Courtier, 1893; Derwort, 1938;
Jack, 1895) on the co-variation between speed and path curvature in tracing movements underlined
the tendency to slow the movement in the most curved parts of the path. Forty years later, Derwort
(1938) reported an inverse relation between speed and curvature, but the quantitative study of this
phenomenon began with the work of Viviani and Terzuolo (1982), who proposed an analytical
formulation of the relation between speed V and radius of curvature R, later extended to complex
paths (Lacquaniti, Terzuolo & Viviani, 1983). In this study, we verify whether the strategies1 used
by subjects to identify shapes (here ellipses and circles) (Ziat & Gapenne, 2005) by means of a
sensory substitution device (Tactos) (Lenay, Gapenne, Hanneton, Marque & Genouëlle, 2003)
follow the 2/3 power law and thus preserve the properties of a gesture.

Procedure and Results


The data to analyse are files recorded by the Tactos software during the experiment. These
contain information about the subject’s trajectory on the shape: ms, the moment when the
coordinates are sampled; PX, the X-coordinate of the trajectory at that moment; and PY, its
Y-coordinate. We used Mathematica to process the data, with the following successive steps:
extraction of the columns (ms, PX, PY), display of the trajectory, calculation of the tangential speed
V and the radius of curvature R, and calculation of the logarithms of V and R in order to obtain the β
value (Log(V) = Log(K) + βLog(R)) by linear regression.
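The authors used Mathematica; purely for illustration, the same pipeline can be written compactly in Python. In the sketch below the numerical derivatives and thresholds are assumed details, not the authors' exact procedure; β is estimated by regressing Log(V) on Log(R).

import numpy as np

def estimate_beta(t_ms, px, py):
    # Tangential speed and radius of curvature from the recorded trajectory,
    # then beta from the linear regression Log(V) = Log(K) + beta * Log(R)
    t = np.asarray(t_ms, dtype=float) / 1000.0
    x, y = np.asarray(px, dtype=float), np.asarray(py, dtype=float)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    v = np.hypot(dx, dy)
    cross = np.abs(dx * ddy - dy * ddx)
    ok = (v > 1e-9) & (cross > 1e-9)
    r = v[ok] ** 3 / cross[ok]                 # radius of curvature
    beta, log_k = np.polyfit(np.log(r), np.log(v[ok]), 1)
    return beta, np.exp(log_k)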

Figure 1. Box-whisker plots for the three strategies (BoxPlot 1 represents β and BoxPlot 2
represents the R-squared value).

1 We analysed only trajectories where subjects used either a continuous follow-up or a micro-sweeping strategy (see
Ziat & Gapenne, 2005).
Figure 1 shows that the value of β varies between 0.2 and 0.4 and is close to 0.3 (1/3) for the
continuous follow-up, with a correlation coefficient around 50% (indicated by 2 on each graph).
These first results are encouraging and we are continuing the analysis to improve the value of β. We
are currently smoothing the data in order to bring an irregular subject trajectory closer to a natural
movement (see Figure 2). The value of β closest to 1/3 is obtained for the continuous follow-up,
which is the strategy closest to a natural movement. It seems likely that smoothing the data of the
other strategies (micro-sweeping and axis with a continuous follow-up) will improve their β values.

Figure 2. a) Real trajectory of a subject on a horizontal ellipse, b) the same path after
smoothing (500 points).

References
Binet, A., & Courtier, J. (1893). Sur la vitesse des mouvements graphiques, Travaux du Laboratoire de psychologie
physiologique. Revue Philosophique, 35, 664-671.
Derwort, A. (1938). Untersuchungen uber den zeitablauf figurierter bewegungen beim menschen. Pflugers Archiv fur
die gesamte Physiologie des Menschen und der Tiere, Jg., 240, 661-675.
Jack, W. R. (1895). On the analysis of voluntary muscular movements by certain new instruments. Journal of Anatomy
and Physiology, 29, 473-478.
Lacquaniti, F., Terzuolo, C., & Viviani, P. (1983). The law relating the kinematic and figural aspects of drawing
movements. Acta Psychologica, 54, 115-130.
Lenay, C., Gapenne, O., Hanneton, S., Marque, C., & Genouëlle, C. (2003). Sensory Substitution, Limits and
Perspectives. In Touch for Knowing, Amsterdam: John Benjamins Publishers.
Viviani, P., & Terzuolo, C. (1982). Trajectory determines movement dynamics. Neuroscience, 7, 431-437.
Ziat, M., & Gapenne, O. (2005). Etude préliminaire visant la détermination de seuils de confort pour un zoom haptique.
17th French-speaking conference on Human Computer Interaction (IHM’05), Toulouse, France (27-30
septembre, 2005), 3-10.

POSTERS
A classification of video games deduced by a pragmatic approach
Julian Alvarez1,2, Damien Djaouti2, Rashid Ghassempouri2,
Jean-Pierre Jessel1 & Gilles Méthel2
1 IRIT-SIRV, University of Paul Sabatier, France
2 LARA - Axe Arts Numériques, ESAV - University of Toulouse le Mirail, France
alvarez@irit.fr

The aim of this article is to present a classification of video games deduced by a pragmatic
approach. The methodology consisted in indexing a significant number of video games by taking
into account their elementary “game bricks”. Finally, all these combinations were studied in a
database called V.E.Ga.S.

Introduction
On the very first pages of his pioneering work, Propp (1928) postulates: "Although there is a
place for the classification as a basis of every research it must be the result of a further study. Or,
we observe the opposite situation: Most of researchers start by classifying, thus introducing facts,
when in fact, they should rather deduce."
Thus, in order to adhere to the "formal and abstract" study of Propp, we have chosen the
approach taken by the game designers Katie Salen and Eric Zimmerman (2004) to study video
games. Because their "fundamental principles" are elements you can put together in order to
manage any game, they are similar to the "functions" of Propp, which are combined in order to
make up any tale. We retained in our study only the "fundamental principles" that are in touch with
the "outside" as defined by Winnicott (1971). Finally, as underlined by Salen and Zimmerman, we
have played the video games, because a theoretical approach is not sufficient: "A game design
education cannot consist of a purely theoretical approach to games. This is true in any design field."
(p. 11).
Following this methodology we have developed the V.E.Ga.S. (Video Entertainment & Games
Studies) tool (Alvarez, Djaouti, Gassempouri, Jessel & Methel, 2006). It is dedicated to the
morphological study of video games, in order to classify them, study their very nature and
corroborate hypotheses in a pragmatic approach.
In this paper we present the classification obtained in this way.

The classification
This experimental approach has produced encouraging and coherent results, particularly with the
discovery of 12 "game" bricks called "Answer", "Avoid", "Block", "Create", "Destroy", "Have
luck", "Manage", "Move", "Position", "Score", "Shoot", "Time" and a special one, "Toy", dedicated
to video games which do not include challenges (Fig. 1). We have also discovered 4 "metabricks" called
"DRIVER", "KILLER", "GOD" and "BRAIN" and the 4 rules in association to them (Alvarez,
Djaouti, Gassempouri, Jessel & Methel, in press):

1) "Metabricks" are the combinations of two complementary game bricks that make up a
challenge.

2) Adding a game brick to a metabrick gives the challenge it carries a variant
which does not alter its very nature.

3) If we add several game bricks to a metabrick, the second rule holds as long as the
combination of game bricks does not form another metabrick.

4) Associating metabricks leads us to associate their respective challenges.



Figure 1. Summary of game bricks and metabricks identified at this stage.

With these elements, we can present a first proposed classification of video games:

Figure 2. Classification of all the possible combinations of video games according to V.E.Ga.S.
(Aug. 2006)

Figure 2 shows the 15 combinations that can be made with the 4 discovered metabricks
(below the arrows). That represents 15 different video game challenges. For each of them, some
game bricks can be added (above the arrows) to create variations of the game.
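As a tiny illustration of the combinatorics behind Figure 2 (the metabrick names come from the text; the enumeration itself is our sketch), the 15 challenge families are simply the non-empty subsets of the four metabricks:

from itertools import combinations

METABRICKS = ["DRIVER", "KILLER", "GOD", "BRAIN"]

# Every non-empty subset of the 4 metabricks: 2**4 - 1 = 15 challenge families
families = [c for r in range(1, len(METABRICKS) + 1)
            for c in combinations(METABRICKS, r)]
print(len(families))  # 15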

Conclusion
Our study nevertheless requires a refinement of the definitions of our game bricks.
Some bricks still have definitions that are too broad, like the "Answer" brick. We are thus considering a second
version of our tool V.E.Ga.S.
First, the number of games indexed (588 at this stage) must be increased, perhaps allowing us to
discover or to confirm the existence of new metabricks. But we must also be able to obtain more
formal results and to evaluate our own subjectivity when indexing games.
V.E.Ga.S. is accessible on the following address: http://www.bigarobas.com/ludovia/vegas/

References
Alvarez, J., Djaouti, D., Gassempouri, R., Jessel, J. P., & Methel, G. (2006). V.E.Ga.S.: A tool to study morphology of
the video games. Games2006 Portalegre - Portugal.
Alvarez, J., Djaouti, D., Gassempouri, R., Jessel, J. P., & Methel, G. (2006). Morphological study of the video games,
CGIE2006, Perth, Australia (to appear Dec. 2006).
Propp, V. (1928). Morphologie du conte. Seuil, (1970) (Translated from French in this paper).
Salen, K., & Zimmerman, E. (2004). Rules of Play. The MIT Press.
Winnicott, D. W. (1971). Jeu et réalité. Folio essais, (2002).
Perceptive strategies with an enactive interface

Yusr Amamou & John Stewart


COSTECH, Technological University of Compiègne, France
yusr.amamou@utc.fr

This study reveals that when human subjects use an "enactive interface" to perceive graphic
forms, they employ a variety of perceptive strategies. The aim of the present paper is to introduce
the question of the significance of the enaction concept in perceptive strategies of graphic forms.

Introduction
Today the "enactive" theory of perception is coming to the forefront of research in cognitive
science. The paradigm of enaction (Varela, 1998) underlines the contribution of action to
perception. Perception comprises an active organization (Piaget, 1970); it results from a sensori-
motor coupling, an exchange between an organism and its environment. According to the enactive
conception, perception is intrinsically active. Perception is thus a form of action.
The sensory experience of a subject depends on the sensory consequences of their own
movements or "sensori-motor contingencies" (O'Regan and Noë 2001). Our work with the
minimalist coupling device "Tactos", which will be described in more detail below, lies within the
confines of the paradigm of enaction. This study aims at explaining how the perception
prosthetized via a coupling device is constituted. The "Tactos" device offers a new possibility of
action and sensation; it modifies the natural sensorimotor loop and builds an artificial coupling, a
prothetized perception. It transforms the stimuli usually attached to the ocular modality into stimuli
attached to the tactile modality. Endowed with a device of sensorimotor coupling, one gives oneself
the means of empirically studying the transformed perception and actions, as well as the sensory
events which progressively involve structured and stabilized movements (the perceptive strategies).
Thus the sensory inputs1 are deliberately reduced to a single bit of information (sensory return or
not at every moment). This minimalism serves at the same time to apprehend a simplified
perceptual situation and to offer a low-cost device. Moreover, the minimalism produces an
externalisation of the actions, which are easily observable and whose traces can be kept for
analysis. In a perceptual experiment such as ours, the subject is determined by what he does and
what he is able to do.

Experimental context
The actions of a human subject consist in moving a stylus on a graphic tablet (Figure 1). These
movements determine the position of the cursor on the monitor. The subject tries to establish
contact with the form. When the stylus crosses the virtual form, the computer processes the signals
and transmits information to the stimulators, and the pins are activated. The subject has a finger
placed on the tactile stimulators and thus receives tactile feedback (stimulation, or not, is the result
of the subject’s own actions). The tactile stimuli are deliberately reduced to a strict minimum: when
the subject crosses the form, the sixteen pins are raised; otherwise the pins are inactive and the
subject does not perceive any stimulation. The subject can perceive the form only by moving the
stylus and following its contour. The movements controlled by the stylus are presented in the form
of a trajectory.
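The minimal coupling can be summed up in a few lines. In the sketch below the form is represented by a hypothetical binary mask (an assumption about the data structure, not a description of the Tactos software), and the feedback is the single all-or-nothing bit described above:

def tactile_feedback(cursor_x, cursor_y, form_mask):
    # One step of the sensorimotor loop: all sixteen pins are raised when the
    # cursor overlaps the form, and all are lowered otherwise (one bit per step).
    on_form = bool(form_mask[cursor_y][cursor_x])
    return [int(on_form)] * 16

# Example: a 3x3 "form" occupying only the centre cell
form = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(tactile_feedback(1, 1, form))  # all pins raised
print(tactile_feedback(0, 0, form))  # no stimulation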

Discussion
The novice subjects used the Tactos device for the first time. An analysis of exploratory trajectories
suggested that the twelve subjects adopted one of three strategies of form perception (Figure 2;
Amamou and Stewart 2006). These are: "continuous tracking" (the subject tries to maintain
contact as long as possible, advancing and trying to stay continually on the form; 23% of subjects
employ this strategy), "nano-sweep" (the subject oscillates inside the contact zone; 43% of
subjects employ this strategy) and "micro-sweep" (the subject deliberately implements oscillatory
movements while sweeping around the form; 34% of subjects employ this strategy). The interaction
of the subject with the (virtual) form is mediated by the Tactos device. The subject uses the sensory
feedback to guide subsequent actions; these actions determine subsequent sensory feedback;
together, this constitutes the dynamics of a sensory-motor loop.

1 Lenay (1997) used a substitution device simplified to only one bit of information. He illustrated the fact that there is
no perception without action. He also showed that the perception of graphic forms, which is possible with this device,
cannot come from the sensory inputs alone.

Figure 1. Experimental device: Tactos.
Figure 2. Trajectories of do2 as a function of time.

Conclusion
The purpose of this search for strategies is to determine how sensory returns are concretely
used, and the end for which these strategies are conceived. In order to perceive the graphic form,
each subject adopted one of the strategies identified by the analysis. It is also a question of
establishing the properties of use of this minimal coupling device. The task of identifying the
graphic forms had two objectives: i) to carry out precise movements so as to learn how to interact
with the device and ii) to follow rules laid down by the subjects. The strategies employed correspond
to coordinated movements. Once the conditions for the effectiveness of a strategy are identified, its
facilitation could be suggested. This assistance and training of the perceptivo-cognitive activity
will in turn be subjected to new tests. Prosthetized perception begins when the subject adopts a
strategy of action which he employs to identify the graphic form.
The identification of graphic forms raises the question of how graphic forms are explored in
the absence of vision, and here the subject begins to devise strategies. This analysis clarifies the
perceptive strategies used by the subjects and gives an idea of the subjects' informational
requirements. Our question now is: can one establish a link between strategy and performance?
How do these strategies evolve if the graphic forms become more complex?

References
Amamou, Y., & Stewart, J. (2006). Analyse descriptive de trajectoires perceptives. In Actes (ACM Press) 18th French-
speaking conference on Human Computer Interaction (IHM'06), Montréal, Canada, April 18-21.
Lenay, C. (1997). Mouvement et perception : médiation technique et constitution de la spatialisation. Communication à
l'école d'été de l'Association pour la Recherche Cognitive sur le mouvement, 69-80.
O’Regan, K. J., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain
Sciences, 24, 5-115.
Piaget, J. (1970). Psychologie et épistémologie génétique. Pour une théorie de la connaissance. Paris: Editions Denoël,
pp 191.
Varela, F. (1998). Le cerveau n’est pas un ordinateur. La recherche, 308, 109-112.

2 do is the orthogonal distance of each point of the trajectory from the form.
Dissociating perceived exocentric direction and distance in virtual space

José Aznar-Casanova1, Elton Matsushima2, José Da Silva3 & Nilton Ribeiro-Filho4


1 Department of Basic Psychology, Faculty of Psychology, University of Barcelona, Spain
2 Departamento de Psicologia, Universidade Federal Fluminense, Brasil
3 Departamento de Psicologia e Educação, Universidade de São Paulo at Ribeirão Preto, Brasil
4 Instituto de Psicologia, Universidade Federal do Rio de Janeiro, Brasil
jaznar2@ub.edu

A common result of the investigations on the metric of visual space was perceptual
differences in accuracy for egocentric and exocentric distance judgments (for a review: Sedgwick,
1986). Another common finding was that visual angle subtended between two objects (from now
on, exocentric direction) is a robust predictor of the estimated distances (Levin & Haber, 1993;
Matsushima et al., 2005). Their results indicated that the greater the exocentric direction, the larger
the overshooting of the corresponding exocentric distance. Exocentric distance and direction could
be considered as complementary elements that altogether compose the perceptual representation of
the environment. Thus, to fully understand how this exocentric frame of reference is organized, one
must determine how exocentric distance and direction are related: whether they operate
independently or, on the contrary, they are tightly related, constraining one another in their
perceptual properties. Some evidence for interdependency between exocentric distance and
direction showed that the anisotropies in visual space were a consequence of influences of
exocentric direction of spatial intervals in distance perception (Levin & Haber, 1993; Loomis &
Philbeck, 1999; Matsushima et al., 2005).

Method
The virtual 3D environment was generated on a Pentium IV 3GB with a Wildcat VP880 graphics
card and OpenGL software, connected to LCD stereo shutter goggles and a 22” CRT monitor (1024x768
pixels). It was composed of pairs of stimuli (red cylinders) against a black background,
placed at one of the combinations of 9 exocentric distances (34-50 cm, in 2 cm steps) and 9
exocentric directions (25°-60°, in 5° steps). Standard stimuli always had a 42 cm length and a 45°
exocentric direction. The exocentric direction (slant in depth dimension) was generated by crossed
and uncrossed binocular disparities (near and far spaces, respectively). Observers sat 1 m from the
screen, accomplished 20 practice trials before the first set of 243 trials (3 series of 81 combinations
of exocentric distances and directions) in one of the stereoviewing conditions (crossed or
uncrossed), and on another day, they accomplished another 20 practice trials before the last set of
243 trials under the other stereo viewing condition. Observers accomplished two-alternative forced
choice tasks, indicating which one was larger, standard or comparison stimuli, comparing their
exocentric distances, in Exp.1, and their exocentric directions, in Exp. 2. Stimuli for both
experiments combined different exocentric distances and directions.
The dissociation was extracted by regression fits of data for uncrossed disparities and for
crossed disparities of specific sets of combinations. To dissociate exocentric distance and direction
influences, analyses either focused on performance for exocentric distances collapsing across
exocentric directions or for exocentric directions collapsing across exocentric distances.

Results and Discussion


In Exp. 1, we predicted that data for exocentric distance judgments with different exocentric
directions would fit to a linear function. If its slope was equal to zero, there will be no influence of
exocentric direction on exocentric distance processing. On the other hand, if slope was different
from zero, exocentric direction of the interval affects judgments of its exocentric distance. The same
stands for Exp.2, we predicted that data for exocentric direction with different exocentric distance
would also fit to a linear function, with no influence of exocentric distance on exocentric direction
for slope equal zero, and significant influences, for slopes different from zero. In Exp. 1, for
uncrossed disparities, the data fit a linear function with slope = -.001 (SE = .001), which did not
differ from zero (p = .225), and for crossed disparities, the data also fit a linear function, slope =
-.001 (SE = .001), which, again, did not differ from zero (p = .182). On the other hand, for Exp. 2,
for uncrossed disparities, the data presented a good fit to a linear function, slope = -.025 (SE = .005),
which was reliably different from zero (F = 132.900, p = .0001). For crossed disparities, the data fit a
linear function, slope = -.016 (SE = .0001), which reliably differed from zero (F = 329.100, p = .0001).
Our data showed that, at least for stereoscopically generated 3D scenes, exocentric distance
judgments are not influenced by exocentric directions; however, exocentric direction judgments are
significantly biased as a function of exocentric distance. This finding suggests that there is a
dissociation between the mechanisms specialized in distance processing and those specialized in
direction processing in an exocentric frame of reference. This asymmetry suggests a hierarchical
processing, with exocentric distance processed first and exocentric direction processed afterwards
or, more probably, that a necessary condition for processing the exocentric direction between two
stimuli is to know their exocentric distance.
It has been proposed (Foley, 1980; Heller, 2004) that the mechanism for processing exocentric
distance is the horizontal binocular disparity cue, which requires basically horizontal disparities (∇h).
The processing of exocentric direction follows different rules. According to the rules for projective
transformation, a spatial interval subtends a visual angle α. The perception of its slant depends on
egocentric distance, which affects the visual angle, producing a scaling of the size of interval, and
depends on the eccentricity (e), which decreases the slant in proportion to cos²(e). Additionally, its
visual angle in a frontoparallel plane diminishes in a ratio of about 1/d (d is the egocentric distance),
whilst the same interval receding in the depth plane diminishes in a ratio of about 1/d². Consequently,
when one observes an interval supposedly with fixed length, the changes in size and eccentricity of
the visual angles are converted into changes in exocentric orientation. This may be one reason for
dependency on distance for exocentric direction processing. A second one is a consequence of the
fact that it is also possible that the ratio of vertical and horizontal disparities, ∇v/∇h, may be used to
predict slant (Rogers & Bradshaw, 1994). If this statement holds, exocentric direction is a geometrical
property subsequent to exocentric distance processing, in a similar way to the relationship between
egocentric and exocentric distances.
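A quick numerical illustration of the projective relations just mentioned (the eye height, interval length and viewing distances below are arbitrary assumptions, not values from the experiment): the angle subtended by a frontoparallel interval shrinks roughly as 1/d, whereas the angle subtended by the same interval receding in depth on the ground plane shrinks roughly as 1/d².

import math

EYE_HEIGHT = 1.6   # assumed eye height above the ground plane (m)
LENGTH = 1.0       # interval length (m)

def frontal_angle(d):
    # Angle (deg) of a frontoparallel interval at distance d (~ 1/d)
    return math.degrees(2 * math.atan(LENGTH / (2 * d)))

def depth_angle(d):
    # Angle (deg) of the same interval lying in depth on the ground plane (~ 1/d**2)
    return math.degrees(math.atan(EYE_HEIGHT / d) - math.atan(EYE_HEIGHT / (d + LENGTH)))

for d in (2.0, 4.0, 8.0):
    print(f"d = {d:3.1f} m: frontal {frontal_angle(d):5.2f} deg, in depth {depth_angle(d):5.2f} deg")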

References
Foley, J. M. (1980). Binocular distance perception. Psychological Review, 87, 411-434.
Heller, J. (2004). The locus of perceived equidistance (LPED) in binocular vision. Perception & Psychophysics, 66(7),
1162-1170.
Levin, C. A., & Haber, R. N. (1993). Visual angle as a determinant of perceived interobject distance. Perception &
Psychophysics, 54(2), 250-259.
Loomis, J. M., & Philbeck, J. W. (1999). Is the anisotropy of perceived 3-D shape invariant across scale? Perception &
Psychophysics, 61, 397-402.
Matsushima, E. H., de Oliveira, A. P., Ribeiro-Filho, N. P., & Da Silva, J. A. (2005). Visual angle as determinant factor
for relative distance perception. Psicológica, 26, 97-104.
Rogers, B. J., & Bradshaw, M. F. (1994). Is dif-frequency a stimulus for stereoscopic slant? Investigative
Ophthalmology and Visual Science, 35 (Abstracts), 1316.
Spatio-temporal coordination during reach-to-grasp movement

Patrice Bendahan & Philippe Gorce


LESP EA 3162, University of Toulon – Var, France
bendahan@univ-tln.fr

Reach-to-grasp movement is one of the most studied human mechanisms in the biomechanics
and neurophysiology domain. Generally, these works aim to obtain a better understanding of how
the central nervous system coordinates multijoint movements. In this context, two simultaneous
motor actions have been identified: the arm transports the hand near the object and the hand
preshapes the grip aperture in order to grasp the target object. Following this description of the
prehensile action, two controlled components emerge: the transport and grip phases. Several studies
have shown the interdependence between both of them: some authors advanced a temporal
coordination (Hoff & Arbib, 1993) whereas others postulated a spatial coupling (Haggard & Wing,
1995). More recently, a limited number of studies have shown that the closure distance plays an
important role in coordinating the prehensile act. This parameter underlines the existence of a spatial
regularity between the grasp and transport components through a state-space control (Haggard &
Wing, 1995; Saling et al., 1998; Wang & Stelmach, 2001). Nevertheless, this hypothesis has not been
sufficiently tested in the case of obstacle avoidance (Saling et al., 1998), because most
authors are interested in unconstrained movements. Therefore, the present study varied extrinsic
and intrinsic obstacle properties and object size to test the hypothesis that the aperture-closure distance
is a stable variable controlled by the nervous system, and to underline the spatio-temporal
coordination during reach-to-grasp movement.

Materials and methods


Seventeen healthy students (twelve males and five females; aged 20-26 years; all right-handed)
were recruited from the University of South Toulon - Var. All were selected using a
questionnaire which confirmed that none of them had a previous history of neurological or visual
problems. Subjects were seated in a chair facing a table and were
instructed to reach and grasp two types of prototypic object (block and sphere) of two different sizes
(4 and 8 cm) placed behind an obstacle of varying position and size. The objects were placed at 50
cm from the wrist initial position Po in the anterior-posterior direction. Three obstacles of different
sizes (10*35*10, 10*35*15, 10*35*20 cm) were placed randomly at 25 cm (P2 position) or 40 cm
(P1 position) from Po. Each subject executed all the randomized tasks. Movements were recorded
using the VICON™ system. Data were sampled at 120 Hz for 2 s and filtered through a second-order
Butterworth digital filter with a cut-off frequency of 10 Hz. The collected data were used to obtain
kinematic measures for the grasp (time to peak aperture and closure distance) and the wrist (transport
distance, transport time, time to peak velocity and deceleration time). For each dependent measure,
a mean value was calculated for each subject and condition and tested through an ANOVA.
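For illustration, the preprocessing step described above can be reproduced with standard tools. The Python sketch below (marker data, array shapes and the use of zero-lag forward-backward filtering are assumptions, not the authors' exact implementation) applies a second-order Butterworth low-pass filter at 10 Hz to 120-Hz marker trajectories before differentiating them.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 120.0  # motion-capture sampling rate (Hz)

def lowpass(marker_xyz, cutoff=10.0, order=2):
    # Zero-lag Butterworth low-pass filter applied to an (n_samples, 3) trajectory
    b, a = butter(order, cutoff / (FS / 2.0))
    return filtfilt(b, a, marker_xyz, axis=0)

# Example: smooth 2 s of (simulated) wrist data, then compute velocity
wrist = np.cumsum(np.random.randn(240, 3), axis=0)
velocity = np.gradient(lowpass(wrist), 1.0 / FS, axis=0)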

Results
Kinematic parameters of the transport component were influenced by the presence of the
obstacles. Mean values of these parameters are presented in Table 1. The mean transport distance
[F2,16 = 53.828, p < 0.000001] and mean transport time [F2,16 = 30.5, p = 0.00001] increase with the
rise of obstacle height. Moreover, time to peak velocity occurred significantly later for the three
obstacle conditions [F2,16 = 4.166, p = 0.018]. When subjects reached over one of the obstacles,
their deceleration time was significantly longer [F2,16 = 22.61, p < 0.0001]. Obstacle position is
significant for transport distance [F1,8 = 10.4, p = 0.012] and time to peak velocity [F1,8 = 10.65, p =
0.011]. Object size is significant for transport time [F1,8 = 19.80, p = 0.003] and deceleration time
[F1,8 = 16.41, p = 0.004].
Relative time to peak aperture increases with increasing obstacle height [F2,16 = 26.658, p <
0.0001]. Obstacle position and object size have no effect on this parameter. Moreover, it appeared
that the closure distance was quite invariant (Obstacle height [F2,16 = 0.04, p = 0.842], obstacle
position [F1,8 = 2.12, p = 0.180]).

Table 1. Transport and grip component parameters, mean (SD) (a: obstacle height significant, b: obstacle position significant, c: object size significant). H1-H3: obstacle size; P2, P1: obstacle position; PO, GO: object size.

Transport & grasp component       H1             H2             H3              P2             P1             PO             GO
Transport distance (cm) a,b       58.96 (6.9)    69.06 (7.3)    79.94 (7.4)     71.06 (9.2)    62.08 (10.5)   62.09 (14.1)   63.09 (14.3)
Transport time (ms) a,c           1115.5 (186)   1171.4 (165)   1299.0 (192)    1180.2 (177)   1167.85 (210)  1182.7 (202)   1095.3 (194)
Time to peak velocity (ms) a,b    417.7 (117)    421 (140)      441.1 (123)     460.2 (129)    378 (109)      426.6 (120)    401.5 (121)
Deceleration time (ms) a,c        697.8 (164)    750.3 (151)    857.3 (192)     719.9 (162)    789.8 (185)    756.1 (181)    693.7 (170)
Time to peak aperture (ms) a      779.3 (132)    850.4 (129)    1007 (170)      862.4 (163)    857.9 (173)    818.5 (153)    830.5 (200)
Closure distance (cm)             19.64 (5)      19.9 (5.1)     20.03 (4.68)    20.37 (4.6)    19.07 (5.35)   21.63 (4.85)   18.05 (3.1)

Discussion
Our experiment examined the relationship between reach and grasp components when the
reach was vertically elevated to avoid an obstacle placed in the hand's normal travel path and when
the grasp was influenced by the object size. The alteration of the reach path resulted in a longer
transport distance and duration. These characteristics imply an increase in the time to peak
velocity and peak deceleration. These modifications suggest that the motor system needs a longer
time for the final part of the hand transport when an obstacle is present. As observed, the increased
travel distance, which prolonged the transport component, modified all the kinematics features of
the grip. Indeed, the longer time to peak aperture indicates that grip characteristics evolve from the
demands placed on the transport component. This phenomenon suggests a continuous temporal
adjustment. Moreover, our results confirmed the previous findings (Saling et al., 1998; Wang &
Stelmach, 2001), demonstrating that the aperture-opening distance increased systematically as the
total hand-transport distance increased, contrary to the aperture-closure distance. The latter may
be a stable variable controlled by the nervous system for prehensile action, and these results suggest
that spatial coordination is a dominant feature of prehension (Haggard & Wing, 1995). In
conclusion, our findings are in agreement with previous studies supporting the hypothesis that
prehensile movements involve spatio-temporal coordination.

References
Haggard, P., & Wing, A. (1995). Coordinated responses following mechanical perturbation of the arm during
prehension. Experimental Brain Research, 102, 483-494.
Hoff, B., & Arbib, M. A. (1993). Models of trajectory formation and temporal interaction of reach and grasp. Journal of
Motor Behaviour, 25, 175-192.
Saling, M., Alberts, J., Stelmach, G. E., & Bloedel, J. R. (1998). Reach-to-grasp movements during obstacle avoidance.
Experimental Brain Research, 118, 251-258.
Wang, J., & Stelmach, G. E. (2001). Spatial and temporal control of trunk assisted prehensile actions. Experimental
Brain Research, 136, 231–240.
Discrimination of tactile rendering on virtual surfaces
Alan C. Brady1, Ian R. Summers1, Jianguo Qu1 & Charlotte Magnusson2
1 Biomedical Physics Group, School of Physics, University of Exeter, UK
2 Certec, Department of Design Sciences, Lund University, Sweden
Alan.C.Brady@exeter.ac.uk

Introduction
An array of vibrating contactors on the fingertip can provide information about virtual objects
during active exploration of a virtual environment – for example, information about contact area,
edges, corners and surface texture. For the operation of such a device, it is necessary to generate in
real time an individually specified drive waveform for each contactor of the stimulator array. To
reduce the complexity of the problem, a system has been developed at Exeter in which each drive
signal is specified as a mixture of a small number of sinewaves.
The aim of this study is to investigate the discrimination of the virtual textures which can be
produced in this way, with a view to developing a library of textures for use in virtual reality
applications. Textures produced by mixtures of sinewaves are differentiated in terms of the mean
amplitude and the spatial distribution at each sinewave frequency. Experimental questions include
the number of categories (i.e., frequency/amplitude/spatial combinations) that can be distinguished
and the possible correspondence between virtual and real textures.

Apparatus
The tactile stimulator used for this study (Figure 1) is a device developed for the HAPTEX
project on virtual textiles (Allerkamp et al., 2006), and is an evolution of an earlier system
described by Summers and Chanter (2002) and Summers et al. (2001). Vibrotactile stimulation is
provided by a 6 × 4 array of contactors with 2 mm pitch. Each contactor is driven by a piezoelectric
bimorph. Each bimorph is under independent computer control, with a drive waveform specified as
a superposition of up to eight sinewaves. (Only two sinewave components are used in the present
study: 40 Hz and 320 Hz.) The amplitudes of these sinewaves are obtained from a software map – a
virtual tactile “landscape” – written at a resolution of 1 mm. Data on absolute position within the
2D workspace are provided by an A4 graphics tablet (Wacom Intuos 3) and passed to the stimulus
generation software at an update rate of 40 Hz. The A4 workspace is mapped to a window on a
monitor, as shown in the inset to Figure 1. The visual representation is at 1:1 scale with the tactile
workspace.
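A minimal sketch of the drive-signal generation just described (the map format, units and function names are assumptions, not the Exeter implementation): each contactor's waveform is a superposition of the 40 Hz and 320 Hz components, with amplitudes looked up at the contactor's current position in the 1-mm-resolution amplitude maps.

import numpy as np

FREQS = (40.0, 320.0)  # sinewave components used in this study (Hz)

def contactor_drive(contactor_offset_mm, finger_pos_mm, amp_maps, t):
    # amp_maps: one 2D array per frequency, indexed in millimetres (1 mm resolution)
    x = int(round(finger_pos_mm[0] + contactor_offset_mm[0]))
    y = int(round(finger_pos_mm[1] + contactor_offset_mm[1]))
    drive = np.zeros_like(t)
    for f, amp_map in zip(FREQS, amp_maps):
        drive += amp_map[y, x] * np.sin(2 * np.pi * f * t)
    return drive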

Figure 1. The tactile stimulator. The 24 contactors are central on the upper surface. In the inset
screen shot, the tactile cursor which represents the stimulated area is visible between two of the
square targets.

Experimental protocol
The 2D workspace is available for free, active exploration by the test subject. It contains three
square targets, each 40 mm × 40 mm. When the tactile cursor lies outside the target areas, no
vibratory stimulation is delivered to the fingertip. When the cursor lies within a target, the fingertip
is presented with a texture, specified by an amplitude map for 40 Hz stimulation and an amplitude
map for 320 Hz stimulation. Two of the targets have the same texture and the remaining target has a
different texture. The experimental protocol is an “odd-one-out-from-three” task, intended to
measure the subject’s ability to discriminate the two textures.

Figure 2. Scores (% correct) for discrimination of the textures within the three square targets.
For each of the three settings of overall stimulus intensity, data are shown for the four
amplitude contrasts and the two spatial contrasts.

At the time of writing, experimental work is under way for the main part of this study. Limited
data are presently available from a pilot study. In the pilot study, stimulation was at 40 Hz only, and
the target specifications were:
(a) uniform amplitude distribution within each target, contrast in overall amplitude of
2 dB or 4 dB; odd-one-out at – 4 dB, – 2 dB, + 2 dB or + 4 dB;
(b) same mean stimulation level within each target, contrast between uniform amplitude
distribution and non-uniform distribution (Gaussian, range ± 2 dB or ± 4 dB).
Nine subjects (five female, four male) attempted the discrimination task, which was repeated at
three different settings of overall stimulus intensity, differing by 4 dB steps.

Results and Discussion


Mean scores (percent correct identification) over the nine subjects are shown in Figure 2. Data
for the amplitude contrasts show scores close to 100% for the 4 dB contrast and scores well above
chance (33%) for the 2 dB contrast. For the spatial contrasts, inexperienced subjects mostly scored
at chance but high scores were obtained by more experienced subjects (i.e., those with significant
previous experience of tactile-perception experiments). The experienced subjects made more use of
exploratory motion, and reported that the odd-one-out (always non-uniform) felt “rougher” than the
others. In the absence of exploratory motion, a target with small amplitude variations over its
surface appears to be indistinguishable from a target at the same mean level with uniform
amplitude.

References
Allerkamp, D., Böttcher, G., Wolter, F.-E., Brady, A. C., Qu, J., & Summers, I. R. (2006). A vibrotactile approach to
tactile rendering, The Visual Computer (in press).
Summers, I. R., & Chanter, C. M. (2002). A broadband tactile array on the fingertip. Journal of the Acoustical Society
of America, 112, 2118-2126.
Summers, I. R., Chanter, C. M., Southall, A. L., & Brady, A. C. (2001). Results from a Tactile Array on the Fingertip.
Proceedings Eurohaptics 2001, Birmingham, 26-28.

Acknowledgements
Supported by the FET/IST initiative of the EU (ENACTIVE NoE and HAPTEX project).
Accelerating object-command transitions with pie menus
Jérôme Cance, Maxime Collomb & Mountaz Hascoët
LIRMM, CNRS UMR 5506, University of Montpellier 2, France
jerome.cance@gmail.com

In most direct manipulation human computer interfaces, users spend a lot of time in
transitions. Transitions are of various kinds. Moving the pointer from one window application to the
taskbar or to another window application can be considered as a transition. Because of the very
nature of many GUI components (such as toolbars, pull-down menus or else) many of such
transitions cannot be avoided without reconsidering the interaction styles.
In the work reported in this paper we are interested in very generic transitions that we call
object-command transitions. An object-command transition happens when a user first selects an
object, then applies a command to the object, and then returns to the object to resume the ongoing task.
Concrete examples of such transitions are numerous. For example, such transitions occur when a
user selects a piece of text in a text editor, then changes the font by acting on an icon in a toolbar, and
then returns to the piece of text to resume editing.
Our purpose in this work is to provide ways of accelerating such transitions.

Figure 1. Pie menus: regular interaction style (left), reversed style (right). The red line is the
way from the activation of the menu towards the command. The dotted line identifies the
distance to go through to get back to the original item. In the case of reversed style, this
distance is 0.

Context: visualisation application and pie menus


The visualisation application we are using for the study is an application that displays a large
number of visual items representing various types of information on a wide surface. Users can
perform actions on the items displayed either directly (by dragging the items around for example) or
indirectly by using pie menus.
Pie menus (Callahan, Hopkins, Weiser & Shneiderman, 1988) (http://www.piemenus.com/)
have long been considered as a very good alternative to regular pull down menus. A pie menu is a
circular popup menu. Commands are displayed on circles and submenus can be added as depicted
in Figure 1. Command selection depends on direction. So that experienced user can make quick
gesture to get very rapidly to commands. However, when it comes to the object-command
transition, they still have limitations, especially when they contain submenus (like in Figure 1).
Indeed, when a user invokes the menu and has to browse through the submenus, he can be relatively
far from his original location when he reaches the command and the transition back can be
relatively costly.

Enhanced pie-menus
In order to reduce the time spent in object-command transitions with pie menus, we came up
with two new interaction styles for pie menus. In both cases, mouse movements and pointer movements
become decoupled when the pie menu is invoked. Indeed, as soon as the pie menu is active, mouse
movements move the pie menu while the pointer remains fixed. The impression for the user is that
the pie menu slips under the mouse pointer when he moves the mouse. Consequently, in both styles,
when the user reaches the desired command in the pie menu, the pointer is still at the location where
it was when he invoked the pie menu. Both interaction styles thus reduce the
object-command transition to the minimum, i.e. the traversal of the pie menu to the command.
The two interaction styles differ in the way the pie menu slips under the pointer. In the first
interaction style, called fixed style, the menu is controlled by the mouse in the same way as a
toolglass (Bier, Stone, Pier, Fishkin, Baudel, Conway, Buxton & DeRose, 1994) would be. In other
words, if the mouse goes right, the pie menu moves right, etc. In the second interaction style, called
reversed style, mouse movements and pie menu movements are reversed. Consequently, when the user
moves the mouse to the left, the pie menu moves right. The motivation for this design is that the user's
movements are oriented toward the command to reach.
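
As an illustration of this decoupling, the following minimal sketch (our own, not the authors'
implementation; the class and function names are hypothetical) shows how mouse deltas could be
redirected to the menu while the pointer stays at the invocation point, with the sign of the mapping
distinguishing the fixed and reversed styles:

from dataclasses import dataclass

@dataclass
class PieMenuState:
    menu_x: float     # current position of the pie menu centre
    menu_y: float
    pointer_x: float  # the pointer stays where the menu was invoked
    pointer_y: float

def apply_mouse_delta(state, dx, dy, style="fixed"):
    """Move the menu instead of the pointer while the menu is active.

    'fixed'    -> the menu follows the mouse (toolglass-like mapping);
    'reversed' -> the menu moves opposite to the mouse, so the hand
                  travels toward the command to reach.
    """
    sign = 1.0 if style == "fixed" else -1.0
    return PieMenuState(state.menu_x + sign * dx,
                        state.menu_y + sign * dy,
                        state.pointer_x,   # unchanged: pointer stays at invocation point
                        state.pointer_y)

# Example: with the reversed style, moving the mouse 10 px to the left
# slides the menu 10 px to the right while the pointer does not move.
s = PieMenuState(0.0, 0.0, 0.0, 0.0)
s = apply_mouse_delta(s, dx=-10.0, dy=0.0, style="reversed")
print(s.menu_x, s.pointer_x)  # 10.0 0.0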

Experiment
We conducted an experiment with 12 subjects, 9 males and 3 females: 8 subjects were aged between
20 and 30, 2 between 30 and 40 and 2 between 40 and 50. All were experienced computer users
but none had used pie menus before. The task studied consisted of a right-click on a specific item
displayed on the surface to pop up the pie menu, then a traversal through the pie menu to select a
given command, and finally a return to the original item. In order to limit the noise coming from
different command labels, all command labels were numbers.
The experiment used a within-subject design. Each subject performed one block of 10
trials repeating the task for each modality of the main independent variable (regular style - fixed
style - reversed style). A different command was given for each trial and the target item was also
moved on each trial to avoid learning effects. The order in which modalities were presented was
controlled by a Latin square pattern. After the experiment, each user was asked a few questions about
his preferences and impressions. The experiment is available as a Java application and can be
downloaded from the web (http://edel.lirmm.fr/PieMenu).

Results and Discussion


The first 6 trials of each block were considered training trials and the last 4 of each
block were considered trained trials. Results suggest that for the training trials users perform
fastest with the regular interaction style, followed by the reversed and finally the fixed style (average
times to complete the task are respectively 306 ms, 372 ms and 459 ms).
On the contrary, for the trained trials, users perform fastest with the reversed style, followed by
the regular and finally the fixed style (averages respectively 271 ms, 280 ms and 339 ms).
User preferences indicated that 10 out of 12 preferred the regular style, 1 the fixed and 1 the
reversed. However, five of them declared they might prefer the reversed style if they had had a little
more training.

Conclusion
This preliminary study was intended to provide initial cues. It suggests that, after only a little
practice, users might perform better with the reversed pie menu than with the regular one. This would
indicate that our interaction style for pie menus can indeed reduce the time spent in transitions. A more
complete study with more experienced users will be conducted to determine whether this
tendency is confirmed.

References
Bier, E. A., Stone, M. C., Pier, K., Fishkin, K., Baudel, T., Conway, M., Buxton, W., & DeRose, T. (1994). Toolglass
and magic lenses: the see-through interface, Conference companion on Human factors in computing systems,
p.445-446, April 24-28, Boston, Massachusetts, United States.
Callahan J., Hopkins, D., Weiser M., & Shneiderman, B. (1988). An empirical comparison of pie vs. linear menus.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 95-100.
Color Visual Code: An augmented reality interface for mobile phones

Da Cao, Xubo Yang & Shuangjiu Xiao


Department of Computer Science, Shanghai Jiao Tong University, China
yangxubo@sjtu.edu.cn

Introduction
In this paper, we introduce a new marker system, the color visual code, to function as an
augmented reality (AR) interface for mobile phones. Our goal is to encode as much information as
possible into the marker, so that it can represent or link to more digital content for augmented reality.
This way, the marker can serve as a key to query a database, or as a physical carrier of substantial
information, which enables us to use our mobile phone as a new medium.
Related work
Our work is partially inspired by Rohs (2004). That work used a black-and-white visual code
to encode a unique ID recognizable via a mobile phone camera. The authors calculated the
rotation angle and the amount of tilting of the phone camera relative to the code marker as additional
input parameters for the mobile phone, and used the visual code as a 2D interface for menu selection
and interaction with large-scale displays. The capacity of their visual code is 83 bits. In our design,
we use a similar visual code frame and extend it in two ways. Firstly, we increase the
information capacity via color bits. Secondly, we further calculate the 3D-3D transformation
matrix between the color visual code and the phone camera, and deploy our code as an interface
for three-dimensional augmented reality.
Billinghurst and Kato (1999) developed an augmented reality library called ARToolkit. It can
be used to develop basic augmented reality functions, including marker design, marker
recognition and rendering of 3D objects using OpenGL. It is the most popular AR library to date.
However, it was not originally designed for mobile phone interaction. The lack of a uniform
coding scheme for its markers leads to slow marker recognition and an inconvenient
process for associating markers with the corresponding digital content. Moreover, it does not take full
advantage of the marker space to encode more information.
Dell’Acqua et al. (2005) designed Colored Visual Tags, which use three basic colors (red, green,
blue). Red is used as the frame of the marker to identify the regions of candidate
markers. The marker is divided into a grid of equal-sized squares, each of which is further divided
into two triangles. Each triangle is either green or blue. Because each unit can only take one
of two colors, it is essentially a binary marker and does not offer a larger information capacity
than black-and-white markers.
Color Visual Code
We design our Color Visual Code (CVC) as a grid of square units, with each unit as
small as possible. We also provide a uniform coding scheme to make the marker recognition algorithm
faster. As shown in Figure 1, the CVC is composed of four parts: guide bars, corner stones, a benchmark
bar and a coding area. There are two guide bars of different lengths; they indicate the orientation of
the CVC. Three corner stones, which are black units, mark three vertices of the CVC. We place the
benchmark bar horizontally at the top of the CVC, one unit away from the top-left corner
stone. The colors on the benchmark bar represent the values 0 to 7
respectively. The wide space in the middle of the CVC is used to place the code. For convenience we
use small squares for the coding area.
For each input image, our marker recognition algorithm finds the legal code, if one is present,
extracts it, and obtains the positions of the four vertices of the CVC. The following steps are involved:
image binarization, connected-region detection, CVC locating, computation of the homography from
code coordinates to image coordinates, and code extraction. Finally, we obtain large-capacity coded
information and a 3D-3D transformation matrix for augmented reality use. (This work was sponsored
by the National Natural Science Foundation of China, No. 60403044.)
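
The homography step of this pipeline can be illustrated with the short sketch below (our illustration,
assuming the four CVC vertices have already been located; the corner coordinates are invented and
OpenCV is used for convenience, which is not necessarily what runs on the phone):

import cv2
import numpy as np

GRID = 11  # the CVC is an 11 x 11 grid of unit squares

# Code-space coordinates of the four marker vertices (unit = one grid cell)
code_corners = np.array([[0, 0], [GRID, 0], [GRID, GRID], [0, GRID]], dtype=np.float32)
# Hypothetical pixel positions of the same vertices found in the camera image
image_corners = np.array([[120, 80], [310, 95], [300, 290], [110, 270]], dtype=np.float32)

# Homography mapping code coordinates to image coordinates
H, _ = cv2.findHomography(code_corners, image_corners)

# Sample the centre of every cell to read its colour (3 bits per cell with 8 colours)
cell_centres = np.array([[x + 0.5, y + 0.5] for y in range(GRID) for x in range(GRID)],
                        dtype=np.float32).reshape(-1, 1, 2)
pixel_positions = cv2.perspectiveTransform(cell_centres, H)
# pixel_positions[i] gives where to sample the image for cell i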

Figure 1. Color Visual Code

Results
We tested our CVC exactly as shown in Figure 1. Each marker is an 11 x 11 grid. The coding
area is the internal area of 9 x 7 units. Eight colors are used, as in the benchmark bar. Therefore, this
CVC's capacity is 9 x 7 x 3 = 189 bits, more than double that of the Rohs (2004) code at a similar
code size. The demo was tested on a Nokia 6600 with a 300,000-pixel camera.
The augmented reality result is shown in Figure 2. The image on the left is the original
view. The image on the right is the augmented reality view through the phone. We can see that a photo
is placed on the CVC correctly. When the phone moves, the photo remains registered on the marker.
With the larger capacity, we can associate more digital content with the marker.

Figure 2. Augmented Reality with Color Visual Code

Conclusion
We have designed a color visual code and used it as an augmented reality interface for mobile
phones. Our technique is useful for accessing augmented information. For example, we can print the
CVC alongside a newspaper article and then use a mobile phone to see an augmented 3D model,
animation, or video for the article, thereby bringing the static newspaper to life. The larger information
capacity makes it possible to link the marker to more digital content.
Our design for the CVC has good scalability. We can easily increase the information capacity of the
code in two basic ways: (1) increase the size of the grid, i.e. the width and length of the
marker, which also leads to a larger marker; (2) increase the
number of colors. For example, if we increase the grid size of the CVC to 22 x 22 and the number of
colors to 16, the information capacity will be (22 – 2) x (22 – 4) x 4 = 1440 bits. Both ways of expansion
require more computation on mobile phones.
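
The capacity figures above can be checked with a few lines of code (our sketch; the subtraction of 2
columns and 4 rows follows the two examples given in the text, since the coding area excludes the
guide bars, corner stones and benchmark bar):

from math import log2

def cvc_capacity_bits(grid_size, n_colors):
    coding_cells = (grid_size - 2) * (grid_size - 4)  # e.g. 9 x 7 for an 11 x 11 marker
    bits_per_cell = int(log2(n_colors))               # 3 bits per cell for 8 colours
    return coding_cells * bits_per_cell

print(cvc_capacity_bits(11, 8))    # 9 * 7 * 3   = 189 bits
print(cvc_capacity_bits(22, 16))   # 20 * 18 * 4 = 1440 bits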
References
Billinghurst, M., Kato, H., Kraus, E., & May, R. (1999). Shared Space: Collaborative Augmented Reality. Visual
Proceedings of Special Interest Group on Graphics and Interactive Techniques (SIGGRAPH’99), August 7-12th,
Los Angeles, CA, ACM Press.
Dell’Acqua, A., Ferrari, M., Marcon, M., Sarti, A., & Tubaro, S. (2005). Colored Visual Tags: a robust Approach for
Augmented Reality. Proceedings IEEE International Conference on Advanced Video and Signal based
Surveillance (AVSS2005), Como (Italy), September 15-16 2005, pp. 423-427.
Rohs, M. (2004). Real-world interaction with camera-phones. 2nd International Symposium on Ubiquitous Computing
Systems (UCS 2004).
Wearable automatic biometric monitoring system
for massive remote assistance

Sylvain Cardin, Achille Peternier, Frédéric Vexo & Daniel Thalmann


Virtual Reality Laboratory (VRLab), Ecole Polytechnique Fédérale de Lausanne (EPFL),
Switzerland
sylvain.cardin@epfl.ch

The time required to alert medical assistance after a fall or loss of consciousness is a critical
factor for survival. The elderly population, who often live alone and isolated, are the most vulnerable
to these risks. Research has been going on for several decades to develop smart environments able to
track the symptoms (Williams, Doughty, Cameron & Bradley, 1998) of such crises and to automatically
alert appropriate assistance in case of emergency (Eklund, Sprinkle, Sastry & Hansen, 2005).
Human measurements such as body temperature, skin conductivity and physical activity are among the
basic signals used to assess a person's health and stress. Unfortunately, the instruments and wires
used to keep track of this information often limit the mobility and comfort of the user. This abstract
presents the development of a very compact monitoring system, based on a set of sensors connected
to a remote server via a Personal Digital Assistant (PDA), using local wireless connections between
elements (Bluetooth) and WiFi to the distant server. The system also features vibrotactile feedback
to alert the user and check his response. We also use WiFi access points and signal quality to
determine the approximate localization of users within a known perimeter. Such a system is
intended to be used as a large-scale, non-invasive, biometric instrument to constantly check the health
of a large number of users over an internet connection.

Embedded system
The embedded system is based on a microcontroller which collects data from the different
sensors and sends the information to the PDA via a serial Bluetooth link. The sensors used
in this application are analogue sensors related to activity measurement: skin conductivity, body
temperature and angular acceleration of the limbs. Measurements are taken through the integrated
10-bit ADC of the microcontroller. The system collects and processes the information and sends
it as a formatted data structure to the handheld device over the Bluetooth connection,
with a refresh rate of around 200 Hz. The embedded system also includes vibrotactile feedback to alert
the user when an emergency situation is detected. In this case the PDA triggers
vibrotactile feedback rendered on the user's skin. The vibration motors are driven by PWM signals
generated by the microcontroller. The whole embedded system has low power consumption (below 1 W)
and weighs no more than 10 grams, so that it can easily be integrated into the clothing of any user
(see Figure 1).
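
As an illustration of the data path described above, the sketch below (ours, not the authors' firmware;
the frame layout, port name and placeholder readings are assumptions) packs one sensor sample into a
fixed-size binary frame and streams it over a Bluetooth serial link at roughly 200 Hz:

import struct
import time
import serial  # pyserial

# One frame: skin conductivity, body temperature and three limb angular-acceleration values
FRAME = struct.Struct("<Hhhhh")
port = serial.Serial("/dev/rfcomm0", baudrate=115200)

def read_adc():
    """Placeholder for the 10-bit ADC readings (0-1023) of the real sensors."""
    return 512, 370, 10, -3, 5

while True:
    conductivity, temperature, ax, ay, az = read_adc()
    port.write(FRAME.pack(conductivity, temperature, ax, ay, az))
    time.sleep(1.0 / 200.0)  # target refresh rate of about 200 Hz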

Server/client architecture
Handheld devices connect over a WiFi network to a remote server. Information gathered by
the sensors is first processed by a microcontroller embedded in the clothes and forwarded via
Bluetooth to the PDA. The PDA performs a first-level analysis of these values and regularly sends
information to the remote server. In case of minor problems (lost connection, anomalous data
received from the sensors or malfunctions), the PDA software informs the user about the problem and
how to solve it. The PDA software is extremely user-friendly and is built on top of our real-
time 3D platform described in (Peternier, Vexo & Thalmann, 2006) to provide a multimedia
interface with images and animations instead of plain text (which is difficult to read on the small
embedded display). Server-side, the information is stored in a database and compared with previously
received values in order to determine whether sudden changes have occurred or abnormal data has been
recorded (for example when a user's temperature is very high or low and his/her movement is absent).
In these cases, alert signals as well as the user's profile and data are displayed on the server screen to
inform the server-side assistant and help them take a decision. By using the WiFi geo-localization
system, an approximate position of the user is also given to guide a first-aid team to the patient. In
case of emergency, the server assistant can also try to talk with the user by means of a voice-over-
IP (VoIP) service and the speaker/microphone integrated in the PDA, or try to wake up
the user by remotely activating the vibrators.
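
The server-side rule mentioned above (abnormal temperature combined with an absence of movement)
can be sketched as follows; the thresholds are purely illustrative and not taken from the paper:

def needs_alert(temperature_c, activity_rms,
                temp_low=35.0, temp_high=39.0, activity_min=0.05):
    """Raise an alert when temperature is abnormal and no movement is detected."""
    abnormal_temperature = temperature_c < temp_low or temperature_c > temp_high
    no_movement = activity_rms < activity_min
    return abnormal_temperature and no_movement

# Example: very low temperature with no limb movement triggers an alert
print(needs_alert(34.2, 0.01))  # True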

Figure 1. Schematic overview of the system architecture (left), hardware to embed in clothes
(center), the PDA software interface (right)

Conclusion and future applications


Our platform is intended to be used in scenarios where monitoring a large number of users
would otherwise be difficult and expensive. Such a system could keep track of their health by
offering a partially automatic way to identify potentially problematic cases, and requires only a PDA
and some inexpensive sensors. The use of an already existing WiFi infrastructure also
considerably reduces the installation and maintenance costs of our solution: WiFi areas are
more and more widely available and continue to spread in urban regions (as recently in Paris,
with the installation of a large number of free access points all around the city). The system can also
be generalized to any kind of application requiring full-duplex telemetry. In a future application
such a system will be used in our laboratory to gather data from and send control commands to an
unmanned flying vehicle. The goal in this particular application will be to improve the presence of the
tele-operated system by adding multimodal feedback reconstructed from the different sensors on
board.

References
Eklund, J. M., Sprinkle, J., Sastry, S., & Hansen, T. R. (2005). Information Technology for Assisted Living at Home:
building a wireless infrastructure for assisted living. Proceedings of the 27th Annual International Conference of
the IEEE Engineering in Medicine and Biology Society, pp. 3931- 3934.
Peternier, A., Vexo, F., & Thalmann, D. (2006). Wearable Mixed Reality System In Less than 1 Pound. Proceedings of
the 12th Eurographics Symposium on Virtual Environments, Lisbon, Portugal.
Williams, G., Doughty, K., Cameron, K., & Bradley, D. A. (1998). A smart fall and activity monitor for telecare
applications. Proceedings of the 20th Annual International Conference of the IEEE in Medicine and Biology
Society, vol.3, pp.1151-1154.

Acknowledgements
We would like to thank the Enactive network of excellence for their support of our research,
J.C. Zufferey and Dario Floreano from the Laboratory of Intelligent Systems for providing
us with the hardware components, and the master's student J.P. Clivaz for his work.
Improving drag-and-drop on wall-size displays
Maxime Collomb & Mountaz Hascoët
LIRMM, UMR 5506, CNRS University of Montpellier II, France.
collomb@lirmm.fr

With the emergence of wall-size displays, touch and pen input have regained popularity.
Touch/pen input requires users to physically reach content in order to interact with it. This can
become a problem when targets are out of reach, e.g., because they are located too far away or on a
display unit that does not support touch/pen input, as explained by Baudisch et al. (2003). Some
interaction techniques have been proposed to simplify drag-and-drop from and to inaccessible
screen locations, across long distances, and across display unit borders.

The approaches
The proposed techniques include pick-and-drop by Rekimoto (1997), push-and-throw by
Hascoët (2003) and drag-and-pop by Baudisch et al. (2003).
The pick-and-drop mechanism is close to traditional drag-and-drop. It does not require users to
maintain contact with the screen. Instead, users make a click to pick an object and another click to
drop it. The pick and drop operations can occur on different displays but have to be made with the
same pen. Push-and-throw and drag-and-pop use opposite approaches.
1. The pointer-to-target approach
The first approach, illustrated by push-and-throw (fig. 1-left), consists in throwing objects toward
the target instead of moving the pointer all the way to it. As the main problem with throwing is
precision, the idea behind push-and-throw is to provide adequate feedback and trajectories. The
feedback provides users with a real-time preview of where the dragged object will land if thrown,
and trajectories are inspired by the pantograph metaphor. Hence, this temporarily turns the
pen/touch input, inherently a direct pointing device, into an indirect pointing device, shortening
distances as well as making it possible to reach locations further away or on different
screen units.

Figure 1. (L to R): push-and-throw, drag-and-pop and push-and-pop walkthrough.

2. The target-to-pointer approach


Drag-and-pop (fig. 1-centre) takes the opposite approach to push-and-throw. Rather than
sending the dragged object to the periphery, it brings a selection of likely candidate targets
to the user (a “tip icon” is created for each candidate). This allows users to complete drag
interactions in a convenient screen location.
3. Comparison of approaches
There are two major differences between these approaches. The first is the need for
reorientation. With push-and-throw, users are focused on the target space and have to
constantly monitor the screen to adjust their movement. On the other hand, drag-and-pop requires
users to reorient themselves only once. Rubber bands are used to limit this cost, and once users
have identified the target tip icon, they can complete the interaction easily.
The second difference concerns the possibilities offered by each approach. The target-to-pointer
approach assumes that the movement has a target, which is the case when dragging an icon to the
recycle bin, for example, but not when rearranging icons on the desktop.
The best of both approaches
Based on our analysis of push-and-throw and drag-and-pop, we created a new technique
designed to combine the strengths of both. We call this new technique push-and-pop; see
Collomb et al. (2005). Figure 1-right shows a walkthrough in which the user is dragging a Word
document into the recycle bin. The idea behind push-and-pop is to use the world-in-miniature
environment from push-and-throw while keeping the full-size tip icons of drag-and-pop, allowing
users to keep their focus on the source area.
In case users need to rearrange icons on the desktop, they can temporarily switch push-and-pop
into a push-and-throw mode. Users invoke this functionality by moving the pointer
back to the location of invocation. Push-and-throw has also been improved with the introduction of a
non-linear acceleration which addresses its lack of precision and allows one-
pixel pointing precision.
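
The kind of non-linear mapping involved can be sketched as follows (our illustration, not the authors'
exact acceleration function; the break point and gain are invented): small hand displacements map
almost one-to-one, preserving pixel precision, while larger displacements are amplified to cover long
distances:

def amplified_displacement(d, gain=8.0, d0=40.0):
    """Map a hand displacement d (pixels) to a throw displacement."""
    if d <= d0:
        return d                    # near the origin: direct, pixel-precise control
    return d0 + gain * (d - d0)     # beyond d0: amplified to reach distant targets

print(amplified_displacement(5))    # 5.0    -> one-pixel precision preserved
print(amplified_displacement(200))  # 1320.0 -> reaches far-away targets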
Studies
We conducted two experiments to compare movement times and error rates for six techniques (fig.
2): drag-and-drop, pick-and-drop, push-and-throw, drag-and-pop, push-and-pop and accelerated push-
and-throw. Both studies had similar results. Push-and-pop performed best: it was only slightly better
than drag-and-pop in terms of time, but much better in terms of errors. Then came accelerated
push-and-throw, pick-and-drop and push-and-throw. Classic drag-and-drop performed well for short
distances but performed poorly when the task required users to cross the bezel between screens.
[Charts: movement time (s) as a function of target distance (pixels) for the six techniques; error
rates (temporarily dropped / wrong target); user preferences.]
Figure 2. Results of the first study. (L to R): movement times, error rates, user preferences.
Conclusion
Confirming the findings of Baudisch et al. (2003), drag-and-drop performed well as long as
source and target icons were situated on the same display unit, but degraded quickly when long
distances and bezels were involved. In addition, we found that pick-and-drop is affected by distance
in a similar way, though to a lesser extent. This is consistent with Fitts' law.
For all other evaluated techniques, target distance had comparatively little impact on task
performance. However, our studies seem to indicate a performance benefit of acquisition
techniques that require a one-time reorientation (drag-and-pop and push-and-pop) over techniques
that require continuous tracking.
Overall, the study indicates that push-and-pop is indeed a useful technique. Push-and-pop
outperformed all other techniques, including its ancestors, drag-and-pop and push-and-throw.
Participants' subjective preferences reflected this. Push-and-pop also offered a very low error rate.
Among pointer-to-target techniques, accelerated push-and-throw performed significantly better than
traditional push-and-throw. Consequently, the combination of push-and-pop and accelerated push-
and-throw appears to be the most efficient solution in terms of accuracy, speed and reachability.
References
Baudisch, P., Cutrell, E., Robbins, D., Czerwinski, M., Tandler, P., Bederson, B., & Zierlinger, A. (2003). Drag-and-
Pop and Drag-and-Pick: Techniques for Accessing Remote Screen Content on Touch and Pen-operated Systems.
Interact’03, pp. 57–64.
Collomb, M., Hascoet, M., Baudisch, P., & Lee, B. (2005). Improving drag-and-drop on wall-size displays. Graphics
Interface’05, pp. 25–32.
Hascoët, M. (2003). Throwing models for large displays. Proceedings of 11th International Conference on Human
Computer Interaction (HCI'03), pp. 73–77.
Rekimoto, J. (1997). Pick-and-Drop: A direct manipulation technique for multiple computer environments. Symposium
on User Interface Software and Technology (UIST'97), pp. 31–39.
Measurements on Pacinian corpuscles in the fingertip
Natalie L. Cornes, Rebecca Sulley, Alan C. Brady & Ian R. Summers
Biomedical Physics Group, School of Physics, University of Exeter, UK
i.r.summers@exeter.ac.uk

Introduction
A detailed understanding of tactile perception is important to the investigation of multimodal
human interaction with real or virtual environments. In this study the distribution of Pacinian
corpuscles in the human index fingertip was investigated using high-resolution magnetic-resonance
imaging (MRI). Further experiments measured the variation over the index fingertip of the
vibrotactile threshold at 250 Hz. The hypothesis was that the threshold would be lower in regions
where the density of Pacinian corpuscles is higher.
There have been few previous studies of the number and distribution of Pacinian corpuscles in
the fingers. In a cadaver study on elderly specimens (69-89 years old), Stark et al. (1998) observed
10-20 corpuscles in each fingertip, generally clustered round the digital nerves. It is probable
(Cauna, 1965) that the number of corpuscles in the fingertip decreases with age, and it may be
conjectured that young adults have 25-50 corpuscles in each fingertip.
Data are available for vibrotactile threshold as a function of frequency at various body sites,
for example, the thenar eminence on the palm of the hand (Bolanowski et al., 1988) and the distal
pad of the fingertip (Verrillo, 1971). For vibration frequencies in the range 200-300 Hz, tactile
thresholds at most body sites are determined by the Pacinian response. In general, it appears that
vibrotactile thresholds are higher for areas with lower densities of mechano-receptors, such as the
torso or forearm, and lower for areas with higher densities of receptors, such as the fingertip and
palm. It may be hypothesised that this relation should be valid when comparing localised
stimulation sites within a single fingertip. However, the literature for comparison of sites within the
hand is limited (Löfvenberg and Johansson, 1984).

MRI investigation
Images of the index fingertip were acquired from two subjects, ages 22 and 24 years. Imaging
was performed with a Philips whole-body imager at 1.5 T. A fat-suppression MRI technique was
used to produce 3D data sets with a slice thickness of 140 μm and an in-plane resolution of 140 μm
× 140 μm. The data sets were examined manually to identify structures with the ovoid shape of a
Pacinian corpuscle and of appropriate size (i.e., major-axis length ~ 1 mm). Figure 1 shows 2D
projections of the locations of the identified objects – 32 for subject A and 30 for subject B. The
range of major-axis lengths was 0.7 – 1.3 mm for subject A and 0.7 – 1.8 mm for subject B; the
range of minor-axis lengths was 0.2 – 0.7 mm for subject A and 0.4 – 0.7 mm for subject B.

Subject A Subject B

Figure 1. Identified locations within the right index distal phalange of the two subjects. The U-
shaped lines indicate the outlines of the fingers at their widest points. The insets are single
slices from the 3D MRI data sets, with corresponding features circled.
Figure 1 shows that the objects identified as Pacinian corpuscles are clustered off the midline of the
fingertip, in close proximity to the expected locations of the digital nerves, matching the
distributions described by Stark et al. (1998).

Psychophysics investigation
Further experimentation was undertaken using a purpose-built vibrotactile stimulator (Figure
2(a)) to measure the detection threshold at two positions on the right index finger: in the centre of
the fingerpad and towards the side (displaced from the midline by 7.0 mm). Vibratory stimulation
was at 250 Hz and detection of stimuli was investigated over a range of displacement amplitudes
(i.e., using the method of fixed stimuli for threshold determination). Figure 2(b) shows
psychometric curves (mean data over each stimulation site) for 14 young-adult subjects.

[Panels (a) and (b); panel (b) plots stimulus detection (%) against stimulus level (dB).]

Figure 2. (a) The vibrotactile stimulator, showing the curved surface on which the finger rests
and the two stimulation sites within this surface; (b) psychometric curves – the full black line
shows mean data from the side of the finger and the full grey line shows mean data from the centre
of the finger pad. The dotted lines indicate the ranges of standard error.

The difference in threshold (defined as 50% detection rate) between the two sites was determined to
be (5.4 ± 1.8) dB. The threshold at the side of the finger was ~ 1.0 μm and the threshold at the
midline was ~ 2.0 μm.
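
One way to obtain such a 50%-detection threshold from a measured psychometric curve is to fit a
logistic function and read off its midpoint; the sketch below illustrates this with invented data
points, not the study's data:

import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, threshold_db, slope):
    return 100.0 / (1.0 + np.exp(-(level_db - threshold_db) / slope))

levels = np.array([-44, -40, -36, -32, -28, -24, -20], dtype=float)  # dB
detection = np.array([2, 8, 25, 55, 85, 96, 99], dtype=float)        # percent detected

(threshold, slope), _ = curve_fit(logistic, levels, detection, p0=(-32.0, 2.0))
print("50%% detection threshold: %.1f dB" % threshold)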

Conclusions
The MRI results indicate a non-uniform distribution of Pacinian receptors in the fingertip,
with a lower density towards the centre of the fingerpad than toward the sides. The psychophysics
results indicate a non-uniform vibrotactile sensitivity over the fingertip, with a higher threshold on
the midline and a lower threshold toward the side. This lends support to the hypothesis that higher
receptor density is associated with lower threshold, and vice versa.

References
Bolanowski, S. J., Gescheider, G. A., Verrillo, R. T., & Checkowsky, C. M. (1988). Four channels mediate the
mechanical aspect of the sense of touch. Journal of the Acoustical Society of America, 84, 1680–1694.
Cauna, N. (1965). The effects of aging on the receptor organs of the human dermis, Advances in Biology of Skin, vol VI,
Aging. Edited by W Montagna. Oxford, Pergamon Press, pp 63–96.
Löfvenberg, J., & Johansson, R. S. (1984). Regional differences and interindividual variability in sensitivity to vibration
in the glabrous skin of the human hand. Brain Research, 301, 65-72.
Stark, B., Carlstedt, T., Hallin, R. G., & Risling, M. (1998). Distribution of human Pacinian corpuscles in the hand. A
cadaver study. Journal of Hand Surgery [Br.], 23, 370-372.
Verrillo, R. T. (1971). Vibrotactile thresholds at the finger. Perception & Psychophysics, 9, 329–330.

Acknowledgements
The authors thank Philips Medical Systems for scientific support.
Influence of manual reaching movement preparation on visuo-spatial
attention during a visual search task

Alexandre Coutté & Gérard Olivier


Laboratoire de Psychologie Expérimentale (EA 1189), University of Nice-Sophia Antipolis, France
atsabane@wanadoo.fr

Introduction
This research focuses on the motor dimension of visuo-spatial attention. According to
Rizzolatti and Craighero (1998), visuo-spatial attention results from the pre-activation of the
sensorimotor neural networks that allow interaction with the environment. Attention would thus be
oriented in the direction of the prepared manual and/or ocular movement. The results presented here
suggest that the efficiency of visual attention is related, on the one hand, to the manual
response preparation induced by the experimental situation and, on the other hand, to the
manual movement that would allow reaching the intruder.

Procedure
During a visual search task, 24 right-handed subjects had to decide whether an intruder was present
among six dice presented on a computer screen (see Figure 1). Before each trial, the subject placed
either his right or his left index finger on the start button, and had to prepare either a grasp of a far
switch or a grasp of a near switch (see Figure 1). Keeping the
start button pressed triggered the successive appearance of a central fixation point, followed by the
six dice. The subject had to execute the prepared response as fast as possible upon detecting the
intruder. Otherwise, he had to release the start button once he was sure that there was no intruder.
The computer recorded the subject's responses and the reaction times between the appearance of the
dice and the moment the subject's index finger left the start button.

Figure 1. Example of a stimulus (left) and the response device (right) used in this experiment.

Hypotheses
Hypothesis 1: If the subject prepares a distal manual response, reaction times will be
shorter when the intruder is in the distal row than when it is in the proximal one. Likewise, if the
subject prepares a proximal manual response, reaction times will be shorter when the intruder
is in the proximal row than when it is in the distal one. We therefore expect an interaction between
the depth of the intruder's position (proximal row versus distal row) and the position of the
response switch (proximal versus distal).
Hypothesis 2: If a subject prepares a response with the right hand, reaction times will be
shorter when the intruder is located in the right column than when it is in the left one. Likewise, if
the subject prepares to respond with the left hand, reaction times will be shorter when the
intruder is located in the left column than when it is in the right one. We therefore expect an
interaction between the lateral position of the intruder (right versus left) and the hand prepared
to respond (right versus left).
Results
Reaction times are significantly shorter when an intruder is present than when there is none
(F(1,24)=61.47; p<.001), which confirms the sequential nature of dice exploration (Treisman
& Sato, 1990). Furthermore, far intruders are detected faster than near ones
(F(1,24)=14.33; p<.001), which suggests that subjects begin their ocular exploration with the distal row.
These results are in accordance with the expectations of hypothesis 1. Figure 2 indeed shows
an interaction effect on reaction times between the depth of the intruder's position (proximal
row versus distal row) and the position of the response switch (proximal versus distal)
(F(1,24)=8.29; p<.01). Comparisons of means suggest that when a subject prepares to respond on the
distal button, distally graspable intruders are identified faster than proximally graspable
ones.

Figure 2. Mean reaction times as a function of intruder and response switch positions (left
drawing) and as a function of intruder position and response hand (right drawing)

The results are not in accordance with hypothesis 2 (see Figure 2, right drawing). The
interaction effect on reaction times between the laterality of the intruder's position and
the responding hand is not significant (F(1,24)=2.39; p=.135). Nevertheless, if the intruder is
located in the right column, the subject reacts faster when he prepares a right-hand movement than
when he prepares a left-hand movement (F(1,24)=6.37; p<.025).

Discussion
Everything occurs as if there were a compatibility effect between two reaching
movements: on the one hand, the movement prepared to reach the response switch, and on
the other hand the movement required to reach the intruder, whose mental simulation is triggered by
the visual perception of the intruder (Olivier, 2006). In fact, the facilitation effect observed when the
responding hand is the most appropriate one to reach the stimulus (Tucker & Ellis, 1998) seems to be
limited to the subjects' dominant hand, and is not observed for the left hand. To conclude, these
results provide support for the pre-motor theory of attention and for the multi-modal conception of
attention: orienting attention toward a part of space consists in preparing a manual or ocular
movement in that direction.

References
Olivier, G. (2006). Visuomotor priming of a manual reaching movement during a perceptual decision task. Brain
Research, (in press).
Rizzolatti, G., & Craighero, L. (1998). De l’attention spatiale à l’attention vers des objets: une extension de la théorie
prémotrice de l’attention. Revue de Neuropsychologie, 8, 155-174.
Treisman, A., & Sato, S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human
Perception and Performance, 16, 459-478.
Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of actions. Journal of
Experimental Psychology: Human Perception and Performance, 24, 830-846.
Finding the visual information used in driving around a bend:
An experimental approach

Cécile Coutton-Jean, Daniel Mestre & Reinoud J. Bootsma.


UMR 6152 “Mouvement et Perception”, CNRS-University of the Mediterranean, France
cecile.coutton-jean@univmed.fr

Introduction
Driving a vehicle along a winding road is a task that requires detection and integration of
visual information, allowing the driver to regulate both the direction and speed of displacement
(Reymond, Kemeny, Droulez and Berthoz, 2001). Prospective information not only informs the
driver about the current situation, but also about the match between the future path of travel and the
geometry of the road.

In the literature, several strategies for steering have been proposed. Studying the direction of
gaze, Land and Lee (1994) proposed that the drivers use the tangent point (TP, a singularity in the
optic flow) in a curve as an indicator of the upcoming road curvature. Alternatively, Wann and
Land (2000) suggested that fixating any given point on the intended path could also provide an
indication of road curvature. According to a different logic, Van Winsum and Godthelp (1996)
reported that drivers maintained a constant minimum time-to-line crossing (TLC) over curves of
varying radius. TLC is defined as the time necessary for any part of the vehicle to reach an edge of
the lane, under the assumption that the driver would continue along the same travel path at the same
speed (Godthelp & Konings, 1981). Spatio-temporal quantities such as the location (and motion) of
the tangent point or the time remaining until the vehicle will reach the lane boundaries are useful
descriptors of the driver’s behaviour, but it remains to be demonstrated that they are actually
perceived and used in the control of driving.
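
As a simple illustration of the TLC idea, the sketch below computes the time needed to reach the
nearer lane edge under the stated assumption of an unchanged path and speed; it deliberately uses a
straight-line approximation with invented numbers, whereas the full definition also covers curved
predicted paths:

import math

def tlc_straight_path(lateral_offset_m, lane_half_width_m, speed_ms, heading_rad):
    """Time-to-line-crossing toward the nearer lane edge (inf if driving parallel)."""
    lateral_speed = speed_ms * math.sin(heading_rad)  # positive: drifting to the right
    if abs(lateral_speed) < 1e-9:
        return math.inf
    if lateral_speed > 0:
        distance = lane_half_width_m - lateral_offset_m
    else:
        distance = lane_half_width_m + lateral_offset_m
    return distance / abs(lateral_speed)

# 0.3 m right of the lane centre, 1.8 m half-width, 25 m/s, heading 2 degrees off-axis
print(tlc_straight_path(0.3, 1.8, 25.0, math.radians(2.0)))  # about 1.7 s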
In order to determine the source(s) of information used by drivers in controlling their vehicle,
we experimentally manipulated the road characteristics during driving around bends. By laterally
displacing one or both of the lane boundaries and monitoring the adjustments made by drivers in the
speed and orientation of the car, we should be able to better pinpoint the mechanisms underlying the
control of steering. As the radius of curvature is known to influence both the speed of approach and
the way in which the steering wheel is turned (Godthelp, 1986; Van Winsum and Godthelp, 1996),
we evaluated behaviour in curves of varying curvature.

Method
Ten participants took part in the experiment. All held a driving licence and had been driving
for at least 5 years. They all had normal or corrected-to-normal vision.
The experimental platform was animated by a PC computer, equipped with a fast graphics
card and running dedicated software developed in our laboratory (ICE by C.Goulon). Images were
generated at a frame rate of 75 Hz and back-projected onto a large screen (3.0 x 2.3 m), placed 2 m
in front of the participant, thereby subtending a total horizontal angle of 75 degrees. Participants
controlled the direction of travel of the simulated vehicle by means of a steering wheel (ECCI)
mounted in line with their body-axis. A set of foot pedals (accelerator and brake) allowed
controlling the speed of the vehicle. Inputs from the steering wheel and pedals were integrated
online into the motion of the vehicle, using a model of vehicle dynamics.
Each trial presented the driver with a visual environment that contained a ground plane,
textured with gravel, and two superimposed white lines defining the road. The simulated tracks
always comprised 8 curves (constituting a 90° change in direction) with 4 radii of curvature (50,
150, 300, and 500 m), separated by straight line segments of 700 m. Half of the curves were
oriented to the left and the other half to the right. The lane could be either 3.60 m or 7.20 m wide.
While participants negotiated a curve, the edge lines could move inward or outward (over a distance
of 1.80 m), independently or conjointly. Participants performed 3 trials. In each trial, experimental
conditions were pseudo-randomly assigned to the sequence of curves. There was also a control
condition for each lane width. The experiment was run in 5 sessions of 6 trials. Half the participants
started with “normal lane” sessions, followed by “large lane” sessions, while the order was reversed
for the other half. In the first session, 2 training trials were added. The participants were asked to
drive as fast as possible without ever leaving the road.

Results
In order to analyse how lane boundary manipulations influenced steering behaviour, we
analysed, for each subject, curve and trial, the evolution of (a) the lateral position in the
lane, (b) the orientation of the steering wheel and (c) the vehicle's speed. Results will be discussed
with respect to two hypotheses: (1) If drivers modify their behaviour when the inside line is
displaced, this suggests that they use information located on the inside edge of the road. This result
would confirm the possible use of TLC, as defined by Godthelp (1986) and would support the
tangent point strategy (Land and Lee, 1994); (2) If drivers modify their behaviour when the external
line is displaced, this would suggest that information with respect to the outside of the bend is used.
This result would require (re)defining TLC to more closely resemble the time to contact proposed
by Lee (1976). An influence of both would suggest that current models need to be extended so as to
include notions such as tolerance for lateral deviation within a lane.

References
Godthelp, J. (1986). Vehicle control during curve driving. Human Factors, 28, 211-221.
Godthelp, J., & Konings, H. (1981). Levels of steering control; some note on the time-to-line crossing concept as
related to driving strategy. First European Annual Conference on Human Decision Making and Manual Control,
pp. 343-357, Delft, The Netherlands.
Land, M. F., & Lee, D. N. (1994). Where we look when we steer. Nature, 369, 742-744.
Lee, D. N. (1976). A theory of visual control of braking based on information about time-to-collision. Perception, 5,
437-459.
Reymond, G., Kemeny, A., Droulez, J., & Berthoz, A. (2001). Role of lateral acceleration in curve driving: driver
model and experiments on a real vehicle and driving simulator. Human Factors: The Journal of the Human
Factors and Ergonomics Society, 43(3), 483-495.
Van Winsum, W., & Godthelp, J. (1996). Speed choice and steering behavior in curve driving. Human Factors, 38,
434-441.
Wann, J. P., & Land, M. (2000). Steering with or without the flow: is the retrieval of heading necessary? Cognitive
Sciences, 4(8), 319-324.
Does Fitts' law sound good?
Amalia de Götzen1,2, Davide Rocchesso2 & Stefania Serafin3
1 Department of Information Engineering, University of Padova, Italy
2 Department of Computer Science, University of Verona, Italy
3 Department of Medialogy, University of Aalborg, Denmark
degotzen@dei.unipd.it

In our everyday life we perform pointing tasks hundreds of times a day. These tasks are very
different according to the purpose of the gesture. We might be quite accurate and fast when we are
putting a stamp in a precise place on a number of documents, while we might be just fast and less
accurate when we are indicating which direction to take at a crossing. There are many examples of
pointing tasks, each of them characterized by a specific feedback (visual, haptic or auditory) and by
a specific trade-off between accuracy and speed. This work concentrates in particular on Fitts' tasks
performed under continuous auditory feedback and hypothesizes an influence of feedback on
velocity profiles; all Fitts' parameters are transposed into the auditory domain.

Introduction
In 1954 Paul Fitts published the first paper (Fitts, 1954) about a mathematical model applied
to the human motor system. He pointed out that human target acquisition performance could be
measured in information units using indexes derived from Shannon's Theory of
Information. Around 20 years later, the findings of Fitts' studies were applied to
Human-Computer Interaction, a young branch of Computer Science, starting with the work of Card,
English and Burr (1978). Human-Computer Interaction researchers have deeply investigated the use
of Fitts' law as a predictive model to estimate the completion time of a given task, or to
compare different input devices. Nowadays Fitts' law is codified in an ISO standard [1] describing
the evaluation of pointing devices, and it is still under debate in several respects: the mathematical
formulation, the theoretical derivation, the range of application, etc. (Guiard & Beaudoin-Lafon,
2004). This paper is a work in progress concerning the application of Fitts' law in the audio and
multimodal domain. The investigation is motivated by the important role played by multi-modality
and multi-sensory communication in the design of enactive interfaces. Non-speech communication
will play a crucial role within the information stream established between machines and users.

A crash course on Fitts' law


Fitts' law is a predictive law that allows one to estimate the movement time needed to reach a target,
knowing the width of the target and the distance to it. Skipping the analysis of all the
variations on Fitts' law (MacKenzie, 1991), the version used in the ISO standard and commonly
used for research purposes is the following:
MT = a + b log2(A/W + 1)
where MT is the movement time; a and b are the regression coefficients; A is the distance of movement
from start to target center; and W is the width of the target. Two other indexes are used to
evaluate the interaction:
ID = log2(A/W + 1)
IP = ID/MT
where ID is the index of difficulty and IP is the index of performance, analogous to the channel
capacity C of the Shannon theorem.
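
For illustration, the quantities above can be computed, and the coefficients a and b fitted by least
squares, with a few lines of code; the measurements in the sketch are invented, not experimental data:

import numpy as np

def index_of_difficulty(A, W):
    return np.log2(A / W + 1.0)

# Hypothetical task measurements: amplitudes A, widths W (same units) and movement times (s)
A = np.array([64.0, 128.0, 256.0, 512.0, 512.0])
W = np.array([32.0, 32.0, 32.0, 32.0, 16.0])
MT = np.array([0.45, 0.58, 0.72, 0.88, 1.02])

ID = index_of_difficulty(A, W)
b, a = np.polyfit(ID, MT, 1)   # linear regression of MT = a + b * ID
IP = ID / MT                   # index of performance, in bits/s
print("a = %.3f s, b = %.3f s/bit, mean IP = %.1f bits/s" % (a, b, IP.mean()))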

The experimental setup


The purpose of this work is to study Fitts' law from an auditory perspective: the model is
analyzed by providing different audio interactive displays and interfaces in which subjects have to
perform tuning tasks, hearing a simple sinusoid as feedback. Subjects are asked to reach a target
frequency under different modalities (audio, visual and multimodal) as fast and accurately as possible;
they are allowed to make errors. In the auditory domain we consider distance in frequency to represent
the corresponding distance in pixels in the visual domain. We explored different frequency regions to
perform the tasks, varying the Control-Display ratio. During the research process we chose
different representations for the target width: we started with a narrow band of white noise and
ended up varying the duration of the target frequency, assuming that the shorter its duration,
the larger the effective target width. All tests were performed using Pure Data
with its GEM library (www.puredata.info).

Conclusion
Auditory feedback seems very difficult to use without visual feedback, but it
can help performance in multimodal tasks, in particular at high ID values. One interesting
aspect observed during the experiments is that subjects do not feel helped by the audio
feedback even though IP values show that they use it extensively in difficult tasks. A general
behaviour has been noted: with auditory feedback, subjects improve accuracy to the detriment of speed.
The velocity profiles show a slow and accurate approach to the target frequency when auditory
feedback is used, whereas a large overshoot characterizes the visual condition. Within this general
behaviour we also noticed a distortion of the velocity profiles as the ID value increases, a result
that is in accordance with previous studies (Guiard & Beaudoin-Lafon, 2004). This aspect can be
very important if applied to the control of musical sequences.

References
Card, S., English, W., & Burr, B. (1978). Evaluation of mouse, rate-controlled isometric joystick, step keys, and text
keys for text selection on a crt. Ergonomics, 21, 601–613.
Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement.
Journal of Experimental Psychology, 47(6), 381-391.
Guiard, Y., & Beaudoin-Lafon, M. (2004). Fitts’ law 50 years later: application and contributions from human-computer
interaction. International Journal of Human-Computer Interaction Studies, 61, 747–904.
MacKenzie, I. (1991). Fitts’ law as a performance model in human-computer interaction. Doctoral dissertation,
University of Toronto, Toronto, Ontario, Canada.
[1] Ergonomic requirements for office work with visual display terminals (VDTs) – Part 9: Requirements for non-keyboard
input devices (ISO 9241-9). International Organisation for Standardisation, ISO 9241-9:2000(E), 15, 2002.
Fractal models for uni-manual timing control

Didier Delignières, Kjerstin Torre & Loïc Lemoine


Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
didier.delignieres@univ-montp1.fr

Robertson et al. (1999) showed that timing variability in discrete tasks (tapping) was not
correlated with timing variability in continuous tasks (circle drawing). Zelaznik, Spencer & Ivry
(2002) proposed to distinguish between event-based timing (discrete tasks), and emergent timing
(continuous tasks). Event-based timing is conceived as prescribed by events produced by a central
clock, and emergent, or dynamical timing refers to the exploitation of the dynamical properties of
the effectors.

We recently showed that these two timing processes present distinct spectral signatures,
contrasted by opposite behaviors in high frequencies (Delignières, Lemoine & Torre, 2004). Event-
based timers are characterized by a positive slope in the high frequency region of the log-log power
spectrum, whereas dynamical timers are revealed by a simple flattening of the spectrum in this
region. We also showed that both timers produce interval series possessing fractal properties.

The aim of this paper is to present two models that produce time-interval series with the
statistical properties previously evidenced in discrete and continuous rhythmic tasks. The first is
an adaptation of the classical activation/threshold models. This ‘shifting strategy’ model was
introduced by Wagenmakers, Farrell and Ratcliff (2004), and assumes that (1) the threshold presents
plateau-like non-stationarity over time, and (2) the speed of activation varies between successive
trials according to an auto-regressive process. This first process is completed by the addition of
differenced white noise, following the principles of the Wing and Kristofferson (1973) model.
This first model was expected to account for the time series observed in tapping experiments.

The second model is derived from the ‘hopping model’ proposed by West and Scafetta (2003).
A Markov chain obeying an auto-regressive process is assumed to represent a set of (correlated)
potential states of the effector. A random walk is performed on this chain, providing the series of
stiffness parameters of a hybrid dynamical model (Kay et al., 1987). This second model was
expected to account for the time series observed in unimanual oscillations.
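
To make the first model concrete, the sketch below simulates interval series in the spirit of the
shifting strategy model (a plateau-shifting threshold divided by an AR(1) activation rate, plus
differenced white noise for the motor component, as in Wing and Kristofferson); the parameter values
are illustrative and not those used in the paper:

import numpy as np

def shifting_strategy_series(n=1024, phi=0.95, plateau=50, seed=0):
    rng = np.random.default_rng(seed)
    # AR(1) fluctuations of the activation rate around 1.0
    rate = np.empty(n)
    rate[0] = 1.0
    noise = 0.01 * rng.standard_normal(n)
    for i in range(1, n):
        rate[i] = 1.0 + phi * (rate[i - 1] - 1.0) + noise[i]
    # plateau-like threshold: a new level is drawn every `plateau` trials
    levels = 0.5 + 0.05 * rng.standard_normal(n // plateau + 1)
    threshold = np.repeat(levels, plateau)[:n]
    clock_intervals = threshold / rate
    # motor component: differenced white noise (Wing & Kristofferson, 1973)
    motor = 0.01 * rng.standard_normal(n + 1)
    return clock_intervals + np.diff(motor)

intervals = shifting_strategy_series()
print(intervals[:5])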

As shown in Figure 1, both models satisfactorily reproduced the spectral signatures of,
respectively, event-based and dynamical timing processes. The models also produced auto-
correlation functions similar to those observed experimentally: for the shifting strategy model, a
negative lag-one autocorrelation followed by slightly positive, persistent correlations at lags greater
than one; for the hopping model, a persistent, power-law auto-correlation function.
We show, using ARFIMA modeling, that these simulated series possess fractal properties.
Figure 1. Log-log power spectra obtained (1) in a tapping experiment, (2) with the shifting strategy
model, (3) in a uni-manual oscillation experiment, and (4) with the hopping model.

These two models provide plausible solutions for simulating simple timing behaviors. They
could in the future allow a better understanding of more complex processes, notably the control
of relative timing in coordination tasks. From a more theoretical point of view, both models show that
fractal fluctuations can arise from the combination of an auto-regressive process and a random
walk, suggesting a possible universal source for 1/f noise in natural systems.

References
Delignières, D., Lemoine, L., & Torre, K. (2004). Time intervals production in tapping and oscillatory motion. Human
Movement Science, 23, 87-103.
Kay, B. A., Saltzman, E. L., Kelso, J. A. S., & Schöner, G. (1987). Space-time behavior of single and bimanual
rhythmical movements: Data and limit cycle model. Journal of Experimental Psychology: Human Perception
and Performance, 13, 178-192.
Robertson, S. D., Zelaznik, H. N., Lantero, D. A., Bojczyk, G., Spencer, R. M., Doffin, J. G., & Schneidt, T. (1999).
Correlations for timing consistency among tapping and drawing tasks: evidence against a single timing process
for motor control. Journal of Experimental Psychology: Human Perception and Performance, 25, 1316-1330.
Wagenmakers, E.-J., Farrell, S., & Ratcliff, R. (2004). Estimation and interpretation of 1/fα noise in human cognition.
Psychonomic Bulletin & Review, 11, 579–615.
West, B. J., & Scafetta, N. (2003). Nonlinear dynamical model of human gait. Physical Review E, 67, 051917.
Wing, A. L., & Kristofferson, A. B. (1973). The timing of interresponse intervals. Perception and Psychophysics, 13,
455-460.
Zelaznik, H. N., Spencer, R. M., & Ivry, R. B. (2002). Dissociation of explicit and implicit timing in repetitive tapping
and drawing movements. Journal of Experimental Psychology: Human Perception and Performance, 28, 575-
588.
Modification of the initial state of the motor system alters movement prediction
Laurent Demougeot & Charalambos Papaxanthis
INSERM/ERIT-M 0207 Motricité-Plasticité, University of Bourgogne, France
Laurent.Demougeot@u-bourgogne.fr

Background and aims


Motor imagery is a conscious process during which subjects internally simulate a motor action
without actually performing it (i.e. without any apparent movement or muscle contraction). The
motor imagery process involves brain regions essential to the performance of motor actions (Decety
et al., 1996). Moreover, the durations of overt and covert movements have been reported to be very
similar (isochrony) for various types of movements (Papaxanthis et al., 2002). During
imagined movements, the brain simulates the sensory consequences of the movement; such
consequences can be evoked by vibratory stimulation of the wrist, thus creating an illusion of motion
(Naito et al., 2002). In the present study, we tested whether altering the sensory system via vibratory
stimulation could modify the central representation of movement, i.e. its internal
simulation. Furthermore, sensory information is not modified only during the application of vibration:
the distortion of sensory input persists for a period of
time after the end of the stimulation (Wierzbicka et al., 1998). Consequently, we also examined
post-vibratory effects on imagined movements.

Methods
Session 1. Ten healthy students had to walk, or to mentally simulate walking, along a rectilinear
trajectory of nine meters. During actual or imagined locomotion, vibratory stimulation
(stimulation frequency: 80 Hz) was selectively applied bilaterally to various muscles (Triceps
Surae, Tibialis Anterior, Rectus Femoris, Biceps Femoris and Wrist flexors).
Session 2. The same subjects were asked to perform the same motor task, but after the end of
the vibratory stimulation. Vibration was applied for 40 seconds while the subjects were immobile.
Immediately after the vibration had ceased, the subjects actually executed or imagined 30 walking
movements.
For both sessions, the subjects were instructed to perform the task at a comfortable speed
without further instructions about velocity or movement duration. We measured actual and
imagined movement durations.

Results
For the first session, we observed similar effects of muscle vibration on both actual and
imagined movements (Figure 1). For instance, stimulation of the Biceps Femoris muscle induced a
significant temporal reduction (p = 0.0001) in both the actual (-7.57%) and the imagined (-7.08%)
execution of the movement with respect to the control condition, i.e. without vibration. Vibration of
the other muscles did not change the temporal features of the actual and imagined movements.
Figure 1. Mean duration (n=10) of actual and imagined movements. Vibratory stimulation was
selectively applied bilaterally on various muscles.
For the second session, the temporal analysis (Figure 2) showed that the vibratory effect
persisted for up to five minutes after the end of the stimulation, for both actual and imagined
movements. Specifically, imagined and actual movement durations decreased significantly (by 7.85%
and 3.50%, respectively; p < 0.05) immediately after (trials 1-5) the end of the stimulation of the
Biceps Femoris muscle. They then returned gradually to control values (i.e., without vibration) over
trials 5-20. However, the vibratory effects were larger for imagined than for actual movements.

Figure 2. Mean duration (n = 10) of actual and imagined movements after vibration of the
Biceps Femoris (40 s). Each block refers to the average of five trials across all subjects. The first
three blocks refer to trials without vibration.

Conclusion
The theoretical framework of internal models allows us to propose a possible explanation for
our results. During movement simulation, the current or previous alteration of proprioceptive
information induced by muscle vibration is available to the forward internal model which, solely on
the basis of this biased sensory information, predicts the future states of the system (dynamic
prediction) and their sensory consequences (sensory prediction). Thus, the initial alteration of the
sensorimotor system is replicated during movement prediction, in our case during imagination.
During actual movement execution in the post-vibratory condition (Session 2), this alteration persists
for a shorter time than during imagined execution, because sensory input during locomotion, which is
absent during imagination, recalibrates the sensorimotor system and thus allows a more rapid
recovery.
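A hypothetical numerical illustration of this account (the update rule and all values are invented for the example, not the authors' model): a vibration-induced bias in an internal velocity estimate shortens both predicted (imagined) and actual durations, and actual execution, which receives sensory feedback on every trial, recovers faster.

```python
# Hypothetical illustration of the internal-model account (not the authors' model).
# A forward model predicts walking duration from an internal velocity estimate.
# Vibration biases that estimate; actual execution is recalibrated by sensory
# feedback on every trial, imagination is not, so the bias decays more slowly.
distance = 9.0          # m, as in the protocol
true_velocity = 1.3     # m/s, arbitrary comfortable walking speed
bias = 0.10             # initial proprioceptive bias induced by vibration (arbitrary)

est_actual = est_imagined = true_velocity * (1 + bias)
for trial in range(1, 31):
    dur_actual = distance / est_actual
    dur_imagined = distance / est_imagined
    # actual movement: feedback pulls the estimate back toward the true value
    est_actual += 0.3 * (true_velocity - est_actual)
    # imagined movement: no sensory feedback, only a slow spontaneous decay
    est_imagined += 0.05 * (true_velocity - est_imagined)
    if trial in (1, 5, 15, 30):
        print(f"trial {trial:2d}: actual {dur_actual:.2f} s, imagined {dur_imagined:.2f} s")
```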

References
Decety, J. (1996). The neurophysiological basis of motor imagery. Behavioural Brain Research, 77, 45-52.
Naito, E., Roland, P. E., & Ehrsson, H. H. (2002). I feel my hand moving: a new role of the primary motor cortex in
somatic perception of limb movement. Neuron, 36, 979-988.
Papaxanthis, C., Schieppati, M., Gentili, R., & Pozzo, T. (2002). Imagined and actual arm movements have similar
durations when performed under different conditions of direction and mass. Experimental Brain Research, 143,
447-452.
Wierzbicka, M. M., Gilhodes, J. C., & Roll, J. P. (1998). Vibration-induced postural posteffects. Journal of
Neurophysiology, 79, 143-150.
Plantar pressure biofeedback device for foot unloading: Application to the
diabetic foot

Aurélien Descatoire, Virginie Femery & Pierre Moretto


Laboratoire d’Etude de la Motricité Humaine, University of Lille 2, France
pierre.moretto@univ-lille2.fr

Background
Recent progress in measurement tools such as force or pressure transducers enables us to
consider new devices for rehabilitation, particularly in the rehabilitation of a pathologic gait pattern
(Cavanagh et al., 1992). Measurements of plantar pressure distribution play a crucial role in the
assessment and the treatment of foot disorders or, more generally, in gait disturbances. They enable
us to study and quantify the events at the foot-ground interface and could be used in the
development of a biofeedback method. Femery et al. (2004) developed and tested a control
device that provides both visual and auditory feedback and is suitable for correcting plantar pressure
distribution patterns.
The aim here was to improve the biofeedback device for application in the prevention of
neuropathic foot ulceration and to verify the feasibility of a small foot unloading, which requires fine
motor control, during walking tests.

Methods
In the first study, the force-sensing resistors (FSR) measured the pressure with a
10% error, which is insufficient for application in diabetes monitoring. The Paromed hydrocells
(Paromed GmbH, Germany) consist of capsules filled with an incompressible fluid that converts the
different components of the ground reaction force into a pressure. A Footscan plate was used to
locate the high-pressure footprint regions during a first test performed barefoot. Two customized
insoles (right and left) were then made, each comprising 8 Paromed hydrocells distributed under
these locations.

Figure 1. Biofeedback system.


Experimental protocol
Eight healthy subjects performed the walking tests. They first performed a walking test to
record the peak plantar pressure distribution in the normal condition (PPNC). The critical peak
pressure (PPCR) was then defined as 5% below the PPNC. The subjects were then told to unload the
first metatarsal area (M1) by 5% (± 2.5%) (PPUN). This area was chosen for the trial because it is an
at-risk area well known for the development of plantar ulceration in neuropathic diabetic subjects.
Visual feedback was given continuously, whereas the auditory signal was fed back only when the
local pressure under M1 exceeded the PPCR.
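A hypothetical sketch of the feedback rule just described (this is not the device's actual implementation, and the pressure values are arbitrary):

```python
# Hypothetical sketch of the feedback rule (not the actual device code).
# PPNC: peak pressure under M1 in the normal condition; the goal is to walk with the
# peak pressure under M1 about 5% below PPNC; an auditory alarm is triggered whenever
# the instantaneous pressure under M1 exceeds the critical threshold PPCR.

def make_feedback(ppnc_m1, unload_fraction=0.05):
    ppcr = ppnc_m1 * (1 - unload_fraction)           # critical pressure, 5% below normal peak
    def feedback(current_pressure_m1):
        alarm = current_pressure_m1 > ppcr           # auditory warning only above PPCR
        return {"visual_value": current_pressure_m1,  # shown continuously
                "auditory_alarm": alarm}
    return feedback

fb = make_feedback(ppnc_m1=250.0)   # e.g. 250 kPa measured at baseline (arbitrary value)
print(fb(260.0))                    # above PPCR -> alarm
print(fb(230.0))                    # sufficiently unloaded -> no alarm
```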

Results and interpretation


Five of the eight subjects satisfied the 5% unloading under M1. Moreover, the results show that a
5% (± 2.5%) unloading relieves M1 without overloading the other anatomical foot locations or the
contralateral foot (Figure 2).
To conclude, this study of our visual and auditory feedback system is encouraging. The device
provides a warning system that may play a valuable role in preventing injuries or ulceration by
changing the walking pattern in patients who have lost nociceptive perception.

Figure 2. Percentage differences between the normal condition and the instruction to unload M1
by 5%. A: contralateral foot, B: unloaded foot. LH: lateral heel, MH: medial heel, LM: lateral
midfoot, MM: medial midfoot, M1 to M5: metatarsals 1 to 5.

References
Cavanagh, P., Hewitt, F., & Perry, J. (1992). In-shoe plantar measurements: a review. Foot, 2, 185-94.
Femery, V. G., Moretto, P. G., Hespel, J. M., Thevenon, A., & Lensel, G. (2004). A real-time plantar pressure feedback
device for foot unloading. Archives of Physical Medicine and Rehabilitation, 85, 1724-8.

Acknowledgments
This investigation was supported by funds from the Conseil Régional Nord-Pas de Calais, the
Délégation Régionale à la Recherche et à la Technologie du CHRU de Lille and the Institut
Régional de Recherche sur le Handicap.
Oscillating an object under inertial or elastic load: The predictive control
of grip force in young and old adults

Médéric Descoins, Frédéric Danion & Reinoud J. Bootsma


UMR 6152 Mouvement et Perception, CNRS University of the Mediterranean, CNRS France
descoins@laps.univ-mrs.fr

Introduction
Earlier studies have shown that young adults adjust their grip force (GF) according to the
consequences of their movement in terms of the resulting load force on the object (LFO). A
consistent observation is that GF tends to covary linearly with LFO. Interestingly, this coupling
between GF and LFO has been reported to be weaker under an inertial load (the most common load
experienced in daily life) than under other types of load (e.g., elastic; Descoins et al., 2006; Gilles &
Wing, 2003). A first objective of this study was to investigate the possible reasons underlying these
differences in GF-LFO coupling. In addition, in order to better characterize the reasons
underlying the decline in manual dexterity in older adults, the GF-LFO coupling was studied in
both young and older adults. Three earlier studies have addressed the GF-LFO coupling in older
participants (Lowe, 2001; Cole & Rotella, 2002; Gilles & Wing, 2003), but none of them found
a clear effect of aging.

Method
Performance on a task that required rhythmically moving an object (m = 0.4 kg) at 1.2 Hz
in the horizontal plane was compared between 12 older adults (mean age = 66.3 yrs) and 12 young
adults (mean age = 25.0 yrs). Two experimental conditions were tested: 1) with an elastic cord
attached to the object (ELAST), and 2) without an elastic cord (INERT). As shown in Figure 1A, in
INERT, LFO varied essentially at twice the frequency of movement (F2 = 2.4 Hz), but when the
elastic and inertial loads were superimposed in ELAST, the resulting LFO varied primarily at F1, the
frequency of movement (see Figure 1B).

Figure 1. Schematic drawing showing how total force (LFO) relates to movement trajectory in
each experimental condition. A. Without any elastic cord (INERT). B. With the elastic cord
attached to the object (ELAST). Abbreviations: EL=Elastic load (position dependent load), IL
= Inertial load (acceleration dependent load).

The strength of the GF-LFO coupling was evaluated by means of cross-correlations
between GF and LFO. Subsequently, a Fast Fourier Transform (FFT) was applied to the LFO and GF
signals. The rationale was to compare the frequency content of the GF signal with that of the LFO
signal. Note that a necessary condition for a high correlation between two signals is that they have
similar spectral patterns (especially at the main components F1 and F2).
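A minimal sketch of these two analyses, using synthetic sinusoidal signals rather than the experimental recordings (the sampling rate, amplitudes and phase are arbitrary assumptions):

```python
# Minimal sketch of the two analyses described above (illustrative signals, not the data):
# cross-correlation between grip force (GF) and load force (LFO), and FFT amplitudes at
# the movement frequency F1 and its first harmonic F2.
import numpy as np

fs, f1 = 500, 1.2                      # sampling rate (Hz, assumed) and movement frequency
t = np.arange(0, 10, 1 / fs)
lfo = 2.0 * np.sin(2 * np.pi * 2 * f1 * t)                         # inertial-like load, mostly at F2
gf = 1.0 * np.sin(2 * np.pi * f1 * t) + 1.5 * np.sin(2 * np.pi * 2 * f1 * t + 0.1)

# Cross-correlation: peak correlation and the lag (in s) at which it occurs
gf_c, lfo_c = gf - gf.mean(), lfo - lfo.mean()
xcorr = np.correlate(gf_c, lfo_c, mode="full") / (len(t) * gf_c.std() * lfo_c.std())
lag = (np.argmax(xcorr) - (len(t) - 1)) / fs
print(f"peak r = {xcorr.max():.2f} at lag {1000 * lag:.0f} ms")

# FFT amplitudes at F1 and F2 for each signal
freqs = np.fft.rfftfreq(len(t), 1 / fs)
amp = lambda sig, f: 2 / len(t) * np.abs(np.fft.rfft(sig))[np.argmin(np.abs(freqs - f))]
for name, sig in (("GF", gf), ("LFO", lfo)):
    print(name, f"F1 = {amp(sig, f1):.2f} N, F2 = {amp(sig, 2 * f1):.2f} N")
```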
Results
A consistent finding in this study was that elderly participants deployed larger GF when
moving the object than young participants (30.6 versus 22.1 N). Cross-correlation analyses
demonstrated that, compared to ELAST, the GF-LFO coupling was weaker in INERT, especially in
elderly participants. In addition, these analyses revealed that grip force adjustments occurred
slightly ahead of load force fluctuations in young adults (+7 ms), whereas they tended to be
somewhat delayed in the elderly (-26 ms). Concerning the frequency analyses, Figure 2A indicates that
the spectral patterns of GF and LFO in ELAST were quite similar, allowing a high
correlation between these signals. By contrast, the GF and LFO spectral patterns were quite different in
INERT (see Figure 2B). Specifically, GF modulations contained frequency components at the
frequency of the inertial load (F2) but also at the frequency of movement (F1), with the latter being
more pronounced for the elderly participants.


Figure 2. Mean power spectra for the LFO and GF signals in each experimental condition.
A. With the elastic cord attached to the object (ELAST) B. Without the elastic cord (INERT).
Data relative to the young and older participants are presented respectively by the left and
right columns.

Conclusion
Based on the present set of data, we conclude that feedforward control of grip force is
maintained in the elderly. However, it seems less able to accommodate load fluctuations varying at
twice the frequency of movement than fluctuations varying at the frequency of movement. We conclude
that the presence of a considerable F1 component in GF underlies the low coupling between GF and
LFO. The mechanism responsible for this particular pattern of GF variation could be a
coupling phenomenon between neural oscillators (Kelso, 1995): spontaneous GF modulations would
emerge at the frequency of arm movement, independently of the LFO frequency. The neural
mechanisms involved in the predictive control of grip force therefore seem to be altered by age.

References
Cole, K. J., & Rotella, D. L. (2002). Old age impairs the use of arbitrary visual cues for predictive control of fingertip
forces during grasp. Experimental Brain Research, 143, 35-41.
Descoins, M., Danion, F., & Bootsma, R. J. (2006). Predictive control of grip force when moving object with an elastic
load applied on the arm. Experimental Brain Research, 172, 331-342.
Gilles, M. A., & Wing, A. M. (2003). Age-related changes in grip force and dynamics of hand movement. Journal of
Motor Behavior, 35, 79-85.
Kelso, J. A. (1995). Dynamic Patterns: The Self-Organization of Brain and Behavior. Bradford, MIT.
Lowe, B. D. (2001). Precision grip force control of older and younger adults, revisited. Journal of Occupational
Rehabilitation, 11, 267-279.
Expressive audio feedback for intuitive interfaces
Gianluca D’Incà & Luca Mion
Department of Information Engineering, University of Padova, Italy
gianluca.dinca@dei.unipd.it

The implementation of intuitive and effective audio communication is an important aspect of
the development of multimodal interfaces, adapting Human-Computer Interfaces to the basic
forms of human communication. Sounds are used to give environmental information and to
stimulate different behavioral reactions in the user: the auditory system allows sounds to travel,
to be heard and finally to be caught by the user even when attention is directed to other modalities.
Auditory information is usually provided via icons. In the sonification literature, two different
viewpoints have been developed. The first concerns the use of Earcons, abstract musical tones
that can be used in structured combinations to create sound messages (Brewster, 1999). The second
approach uses Auditory Icons, which refer to everyday listening sounds that can be either simple
sounds or patterns (Gaver, 1986). Both fields are widely explored, and they use different
kinds of icons to provide auditory information. In order to design intuitive interfaces, the expressive
content of sounds can be taken into account, since expression is a key factor of human behavior
and supplies information beyond the rational content of explicit messages represented by texts or
scores. Various synthesis models for the expressive rendering of music performances have been
proposed (Canazza, 2000; Friberg, 2000). Beyond the interpretation of a score, expressive synthesis
models can also concern tones and simple sounds without any musical structure. The control of
expression for such non-structured sounds has been less explored, and we are interested in this
aspect for the enhancement of audio in interfaces. The expressive information can be used to
communicate affective content when reacting to the actions of the user. In particular, continuous
control of the sound feedback can provide a more intuitive and more effective interaction than
iconic audio feedback. To perform this control, an important step is to find the mapping between
acoustic parameters and expression. In the field of music performance, expression is mainly
controlled using timing and intensity, but when considering non-structured sounds further
characteristics must be taken into account. For example, studies on Auditory Warning design
(Stanton, 1994) and on musical gestures played by musicians (Mion, 2006a) have underlined the
importance of additional features by means of perceptual tests.

Our approach to modeling the communication of expression through the auditory channel is
based on the analysis of music performances played by real musicians (Mion, 2006b). We found
that different levels of arousal and valence can be effectively communicated by single tones, using
perceptual features besides timing and intensity. We focused on attack, spectral centroid and
roughness, because in our analysis these features were found to be significant for expression
discrimination, and their importance was confirmed by listening tests on synthesized sounds. In
particular, the results of the listening tests showed that increasing the attack time decreases the
perceived arousal; increasing the spectral centroid increases the arousal and decreases the valence;
and increasing the roughness clearly decreases the perceived valence. Figure 1 shows the
mapping between perceptual features and perceived valence and arousal. This approach does not
refer to any musical structure, yielding an ecological approach to the description of expression. In
fact, perceptual features can be associated with physical descriptions by means of physical metaphors:
roughness can be mapped to texture properties like rough/smooth, attack time can be related to
viscosity (sudden/loose), while spectral centroid can be associated with brightness characteristics
(bright/dark).

Figure 1. Geometric interpretation of the perceptual feature mapping: projection of the features
onto the perceived arousal-valence space.

The results of these studies can be used to develop interfaces in which continuous audio
feedback intuitively drives the user through scenarios, or assists the user in performing tasks. In a
first implementation, a very simple task can be considered: the user has to reach a small area on the
screen with the mouse pointer. The continuous sound feedback is determined by the distance
between the pointer and the target area: the greater the distance, the higher the arousal and the
lower the valence of the sound feedback. For example, when the pointer gets closer to the target
area the sound becomes smoother, the attack becomes looser and the spectral centroid decreases.
In this way the task can be performed using only the auditory information: the continuous feedback
guides the user in exploring the scenario, providing intuitive information which is neither iconic
nor symbolic and which can be easily learnt with little training. Preliminary evaluation tests
confirmed the effectiveness of this approach for affective communication in multimodal
interfaces: the expressive audio feedback provides a rich variety of additional information to the
user, inducing shorter-term states such as moods and emotions that enrich the Human-Computer
Interaction.
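A hypothetical sketch of this continuous mapping (the parameter ranges and the linear relationships are illustrative assumptions, not the authors' synthesis engine):

```python
# Hypothetical sketch of the continuous mapping described above (illustrative ranges):
# pointer-to-target distance drives arousal up and valence down, which in turn set
# attack time, spectral centroid and roughness following the reported relationships.

def expressive_feedback(distance, max_distance):
    d = min(max(distance / max_distance, 0.0), 1.0)   # normalized distance in [0, 1]
    arousal = d            # farther from the target -> higher arousal
    valence = 1.0 - d      # farther from the target -> lower valence

    # Reported relationships: longer attack -> lower arousal; higher centroid -> higher
    # arousal and lower valence; more roughness -> lower valence.
    attack_ms = 200 - 180 * arousal          # 200 ms (loose) down to 20 ms (sudden)
    centroid_hz = 500 + 3500 * arousal       # brighter sound when far from the target
    roughness = 1.0 - valence                # rougher texture when valence is low
    return {"attack_ms": attack_ms, "centroid_hz": centroid_hz, "roughness": roughness}

print(expressive_feedback(distance=400, max_distance=500))   # far: bright, sudden, rough
print(expressive_feedback(distance=20, max_distance=500))    # close: dark, loose, smooth
```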

References
Brewster, S. A., & Crease, M. G. (1999). Correcting menu usability problems with sound. Behaviour and Information
Technology, 18(3), 165-177.
Canazza, S., De Poli, G., Drioli, C., Rodà, A., & Vidolin, A. (2000). Audio morphing different expressive intentions for
Multimedia Systems. IEEE Multimedia, July-September, 7(3), 79-83.
Friberg, A., Colombo, V., Frydén, L., & Sundberg, J. (2000). Generating Musical Performances with Director Musices.
Computer Music Journal, 24(3), 23-29.
Gaver, W. (1986). Auditory icons: using sound in computer interfaces. Human Computer Interaction, 2(2), 167-177.
Mion, L., & D’Incà, G. (2006a). Analysis of expression in simple musical gestures to enhance audio in interfaces.
Special Issue of Virtual Reality. In A. Camurri, A. Frisoli (guest eds.) Multisensory interaction in virtual
environments. Berlin: Springer Verlag. (in press).
Mion, L., & D'Incà, G. (2006b). Expressive Audio Synthesis: From Performances to Sounds. Proceedings of the 12th
International Conference on Auditory Display, London, UK, June 20-23.
Stanton, A., & Edworthy, J. (1994). Human Factors in alarm designs. London, UK: Taylor & Francis Ltd.
The Mathematical theory of integrative physiology: a fundamental framework
for human system integration and augmented human design
Didier Fass1 & Gilbert Chauvet 2
1
ICN Business School and LORIA INRIA Lorraine, France
2
EPHE Paris, France and Biomedical engineering University of southern California, USA
Didier.Fass@loria.fr

Introduction
Augmenting cognition and sensorimotor loops with automation and interactive artifacts
enhances human capabilities and performance. It extends both the anatomy of the body and the
physiology of human behavior. Designing an augmented human using virtual environment
technologies means integrating artificial structural elements and their structural interactions into
the anatomy, and artificial multimodal functional interactions into the physiological functions.
The question is therefore how to couple and integrate, in a behaviorally coherent way and by
organizational design, a biological system with a physical, artefactual system: the more or less
immersive interactive artifact.
Training or operational systems design requires taking into account technical devices and
artificial multimodal patterns of stimuli together, and their integration into the dynamics of
human sensorimotor behavior. Thus augmenting human capabilities and enhancing human
performance by using virtual environment technologies requires safe design principles for human
system integration (HSI). To be safe and predictive, HSI models, interaction and integration
concepts, methods and rules have to be well grounded. Just as physical theories and their
principles ground mechanics, materials science and engineering rules, e.g. for airplane
design, HSI needs a theory of integration, a theoretical framework and its general principles.
Augmented human design needs an integrative theory that takes into account the specificity of
the biological organization of living systems, according to the principles of physics, and coherently
organizes and integrates structural and functional artificial elements. Consequently, virtual
environment design for the augmented human involves a shift from a metaphorical, scenario-based
design, grounded on metaphysical models and rules of interaction and cognition, to a predictive
science and engineering of interaction and integration.

In this paper, following the above epistemological rationale, we propose to use the
mathematical theory of integrative physiology (MTIP) (Chauvet, 1996; 2006) as a fundamental
framework for human system integration and virtual environment design (Fass, 2006). To
illustrate this new paradigm, an experimental protocol based on graphical gestures is presented to
assess human system integration and virtual environment design, carried out both in the laboratory
and in weightlessness (Fass, 2005a, b).

Mathematical Theory of Integrative Physiology


The mathematical theory of integrative physiology (Chauvet 1993a,b,c; 1996; 2006) couples the
hierarchical organization of structures (i.e., anatomy) and of functions (i.e., physiology) to derive
the behavior of a living system. This theory, unique in biology, introduces the principles of a
functional hierarchy based on structural organization defined by space scales, on functional
organization defined by time scales, and on structural units (the anatomical elements in
physical space). It addresses the problem of structural discontinuity by introducing elementary
functional interactions between source units and sink units, the combination of which defines a
physiological function. Unlike physical interactions, functional interactions are non-symmetric at
each level of organization, leading to directed graphs, and non-local, leading to non-local fields. They
increase the stability of a biological system by coupling two structural elements. This is a new
theoretical paradigm for knowledge-based systems and augmented human design.
MTIP is thus applicable to different space and time levels of integration, in the physical space
of the body and in the natural or artificial behavioral environment (from the molecular level to the
socio-technical level); to drug design and wearable robotics; and to the design of life- and
safety-critical systems.

Experimental protocol
The gesture-based method for virtual environment design and human system integration
assessment is a behavioral tool inspired by Chauvet's theoretical framework, i.e.: (i) an integrated
marker for the dynamical approach to augmented human design, and for the search for interaction
primitives and the validation of organization principles; and (ii) an integrated marker for a dynamical
organization of VE integrative design.
Designing a virtual environment as a human-in-the-loop system consists in organizing the
linkage of multimodal biological structures (sensorimotor elements at the hierarchical levels of the
living body) with the artificial interactive elements of the system (devices and patterns of
stimulation). There exists a "transport" of functional interaction in the augmented space of both
physiological and artefactual units, and thus a function may be viewed as the final result of a set of
functional interactions that are hierarchically and functionally organized between the artificial and
biological systems.

References
Chauvet, G. A. (1993a). Hierarchical functional organization of formal biological systems: a dynamical approach. I to III. Philosophical Transactions of the Royal Society of London B, 339, 425-481.
Chauvet, G. A. (1996). Theoretical Systems in Biology: Hierarchical and Functional Integration. Volume III.
Organisation and Regulation. Oxford, Pergamon.
Chauvet, G. A. (2006). A new paradigm for theory in Integrative Biology: The Principle of Auto-Associative
Stabilization. Biochemical networks and the Selection of Neuronal Groups. Journal of Integrative Neuroscience,
5(3).
Fass, D. (2005a). Graphical gesture analysis: behavioral tool for virtual environment design, Measuring behavior 2005,
5th International conference on Methods and techniques in behavioral research, 30 august to 2 september 2005,
Noldus Information Technology, Wageningen, The Netherlands, pp 233-236.
Fass, D. (2005b). Virtual environment a behavioral and countermeasure tool for assisted gesture in weightlessness:
experiments during parabolic flight. 9th European Symposium on Life Sciences Research in Space and the 26th
Annual International Gravitational Physiology Meeting, Cologne, Joint life conference “Life in space for Life on
Earth”, Cologne, Germany, 26 June- 1 July; in Proceedings pp 173.
Fass, D. (2006). Rationale for a model of human systems integration: The need of a theoretical framework. Journal of
Integrative Neuroscience, 5(3).
Learning a new ankle-hip pattern with a real-time visual feedback:
Consequences on preexisting coordination dynamics

Elise Faugloire1,2, Benoît G. Bardy1 & Thomas A. Stoffregen2


1
Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
2
School of Kinesiology, University of Minnesota, USA
elise.faugloire@univ-montp1.fr

Introduction
In daily actions, individuals use only a few patterns to coordinate the body segments.
Performing other coordination modes often requires intensive practice. The key idea of the
dynamical approach to motor learning (e.g., Zanone & Kelso, 1992) is that the learning process
interacts with these coordination tendencies: the pre-existing patterns affect and are affected by the
acquisition of a new mode. In the context of stance, two preexisting ankle-hip patterns have been
observed (e.g., Bardy et al., 1999): an in-phase mode (ankle-hip relative phase around 30°), and an
anti-phase mode (ankle-hip relative phase around 180°). We examined the consequences of learning
a new ankle-hip relative phase of 90° on the postural coordination dynamics in two complementary
tasks. In Task 1, the goal for standing participants was to track a moving target with their head: the
ankle-hip coordination emerged spontaneously, without instruction about the relative phase to
produce. In Task 2, we aimed to investigate the entire postural repertoire by asking the participants
to produce numerous specific values of ankle-hip relative phase; hence, the ankle-hip modes were
required. A real time visual feedback given by a human-machine interface allowed (i) the
participants to learn the new coordination of 90°, and (ii) to specify numerous coordinations to
produce (Task 2).

Experimental design
We investigated the ankle-hip coordination dynamics before (pre-test) and after (post-test and
retention test) learning a 90° pattern. The participants (N = 12) stood upright in front of a large
screen with electro-goniometers fixed on the left hip and ankle. Postural coordination was
characterized by the ankle-hip relative phase (φrel) and its standard deviation (SDφrel). The absolute
error AE (absolute difference between required and performed relative phases) was also computed
when a specific pattern was required. During each test session, i.e., the pre-test, the post-test and the
retention test (one week after the post-test), two tasks were performed. In Task 1 (spontaneous
coordination), the participants had to maintain a constant distance between their head and a target
oscillating virtually on the screen along the antero-posterior axis. Two target amplitudes were
tested: a small (8 cm) and a large (25 cm) amplitude, favoring respectively the emergence of
in-phase and anti-phase modes (e.g., Bardy et al., 1999). For Task 2 (required coordination),
participants were asked to produce 12 ankle-hip patterns (from 0° to 330°). Participants received
real-time visual feedback (Figure 1) through a Lissajous figure (ankle angle vs. hip angle) showing
the required relative phase (an elliptic shape to follow) and the actual movements performed (a
trace drawn in real time). During the learning session, participants performed 480 cycles of the 90°
pattern over 3 days, using the real time feedback (Figure 1).
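A minimal sketch of these coordination measures (the abstract does not specify the phase-estimation algorithm; a Hilbert-transform estimate applied to synthetic signals is assumed here):

```python
# Minimal sketch of the coordination measures (illustrative; a Hilbert-transform phase
# estimate and synthetic ankle/hip angle series are assumptions, not the authors' method).
import numpy as np
from scipy.signal import hilbert

def relative_phase_deg(ankle, hip):
    """Continuous ankle-hip relative phase (degrees) from two angle time series."""
    phase_a = np.angle(hilbert(ankle - np.mean(ankle)))
    phase_h = np.angle(hilbert(hip - np.mean(hip)))
    return np.degrees(np.angle(np.exp(1j * (phase_a - phase_h))))   # wrapped to (-180, 180]

def circular_mean_deg(phases_deg):
    return np.degrees(np.angle(np.mean(np.exp(1j * np.radians(phases_deg)))))

# Example trial: the hip lags the ankle by roughly 90 degrees
t = np.linspace(0, 10, 1000)
ankle = 5 * np.sin(2 * np.pi * 0.5 * t)
hip = 20 * np.sin(2 * np.pi * 0.5 * t - np.pi / 2)

phi_rel = relative_phase_deg(ankle, hip)
required = 90.0
diff = (circular_mean_deg(phi_rel) - required + 180) % 360 - 180     # circular difference
AE = abs(diff)                                                        # absolute error
print(f"mean phi_rel = {circular_mean_deg(phi_rel):.1f} deg, AE = {AE:.1f} deg")
```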

Results
For the learning session, repeated measures ANOVAs revealed a significant progress in
accuracy (AE) and stability (SDφrel) with practice, Fs(11, 121) < 6.32, p < .05.
For spontaneous coordination (Task 1) at pre-test, as expected, the majority of participants
performed an in-phase mode for the small target amplitude (meanφrel = 28.81°; SDφrel = 41.18°),
and an anti-phase mode for the large target amplitude (meanφrel = 176.76°; SDφrel = 18.35°).
Learning a non-spontaneous coordination mode of 90° produced a significant change from these
initial coordination patterns toward the learned pattern, with, at post-test, a meanφrel of 99.39°
(SDφrel = 27.82°) for the small amplitude and a meanφrel of 142.73° (SDφrel = 21.88°) for the large
amplitude (Watson-Williams between pre-test and post-test, Fs < 61.09 , p < .05). This modification
in the direction of the learned pattern persisted one week after the post-test (i.e., at the retention test)
for the majority of participants (Watson-Williams between post-test and retention test, Fs(1, 40) <
1.06, ns).

Figure 1. Interface used for the learning session and Task 2.
Figure 2. Absolute error for the 12 required patterns (Task 2) during the 3 test sessions.

For required coordination (Task 2), the mean AE is presented in Figure 2. At pre-test,
accuracy was greater for the required patterns around anti-phase, and decreased as the required
patterns moved away from 180°. The curves for post-test and retention test revealed that learning
90° improved and homogenized accuracy for all required patterns. This improvement of AE was
confirmed by a Pattern (12) × Test (3) repeated measures ANOVA showing a significant effect of
Pattern, F(11, 121) > 14.84, p < .05, of Test, F(2, 22) > 52.26, p < 0.05, and of the Pattern × Test
interaction, F(22, 242) > 2.36, p < .05. Newman-Keuls post hoc analyses revealed that 180° was
more accurate than 8 other required patterns at pre-test, 4 at post-test and only 1 at retention test:
this pattern lost its superiority with learning 90°.

Discussion
Our results for the spontaneous postural modes are in accordance with the theoretical
predictions: as a new coordination mode was learned, its emergence competed with preexisting in-
phase and anti-phase patterns. However, for the required postural modes, the general improvement
of all tested patterns was unexpected, given the theoretical principles and previous results on bi-
manual coordination. Despite the fact that our Task 2 and our experimental design were very close
to classical bimanual studies, the increase in performance for every required coordination mode
contrasted with the destabilization effect (e.g., Zanone & Kelso, 1992) or the absence of
modification (e.g., Lee et al., 1995) observed for the bi-manual system. Our results suggest that the
consequences of learning do not have the same expression in different tasks and are not equivalent
for different effector systems.

References
Bardy, B. G., Marin, L., Stoffregen, T. A., & Bootsma, R. J. (1999). Postural coordination modes considered as
emergent phenomena. Journal of Experimental Psychology: Human Perception and Performance, 25, 1284-
1301.
Lee, T. D., Swinnen, S. P., & Verschueren, S. (1995). Relative phase alterations during bimanual skill acquisition.
Journal of Motor Behavior, 27, 263-274.
Zanone, P. G., & Kelso, J. A. S. (1992). Evolution of behavioral attractors with learning: Nonequilibrium phase
transitions. Journal of Experimental Psychology: Human Perception and Performance, 18, 403-421.
Influence of task constraints on movement and grip variability in the
development of spoon-using skill

Paula Fitzpatrick
Psychology Department, Assumption College, USA
pfitzpat@assumption.edu

Manual skills are an essential part of everyday life that develop over an extended time period
during infancy and childhood and represent a window into the development of enactive knowledge.
Factors such as perceptual acuity, the properties of the tools used, and variability in grasping
patterns that occur over time as a child plays with and uses tools all influence the development of
manual skills. Much of the research to date has focused on characterizing the various grip patterns
children use in holding a tool and describing the orientation of the tool during the task (e.g.,
Connolly & Dalgleish, 1989; Steenbergen, van der Kamp, Smitsman, & Carson, 1997). A dynamic
system theory perspective of motor coordination and its development, in contrast, emphasizes self-
organizing principles of stability, instability, and behavioral transitions to understand the emergence
and progression of skilled motor acts. In broad strokes, behavior patterns can be thought of as
collectives or attractive states of the very many component parts of the system. Under certain
circumstances, particular patterns are preferred and act as attractors in the sense that they are stable
states and are chosen more often. Other patterns are possible but more difficult to maintain and
hence are characterized by high variability. Changes in circumstances (e.g., task, environment,
intention), however, can alter the stability characteristics of the coordinative modes. Based on
predictions of dynamic system theory (Thelen, 1995; Turvey, 1990), it is expected that the
variability in grips used in a spoon-use task will decrease with age. In addition, the structure of
movement is anticipated to vary based on the task constraints. Self-directed movements are
expected to be more refined than object-directed movements (McCarty, Clifton, & Collard, 2001)
and tool characteristics are expected to influence variability.

Research methodology
Sixty children ranging in age from 2 years to 5 years were individually videotaped completing
a series of spooning tasks. One of the tasks was a self-focused task —feeding the self (eating
applesauce, pudding, or yogurt). The second task was object-focused—feeding a puppet with
imaginary food from an empty container. The third task was task-focused—scooping rice into a
bucket using three different spoons that varied in length and concavity (a plastic spoon with a
concave base, a deep scoop with a short handle, and a flat spoon with no concavity and a long
handle).
The videotapes were coded by two research assistants. Data from twenty participants was
coded by both coders and inter-rater reliability was high (r = .94). A total of 15 dependent measures
of grip, movement, and variability were coded but only six will be discussed here. One grip
measure, type of grip, coded whether the participant used a power or precision grip. Observed
frequencies were compared in a chi-square analysis. Three movement variables included time to
complete ten scoops (sec), and, for the rice scooping tasks (measured on site during data collection),
amount scooped (cups) and amount spilled (tsp). Two of the variability measures included number
of spoon orientations and spoon orientation switches (counted during performance of first 10
scoops).
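A hypothetical illustration of the chi-square comparison of grip frequencies mentioned above (the counts below are invented for the example, not the study's data):

```python
# Hypothetical illustration of the grip-frequency analysis (invented counts, not the data):
# a chi-square test of grip type (power vs. precision) across the four age groups.
from scipy.stats import chi2_contingency

#            power  precision
counts = [[12, 1],   # toddlers
          [ 8, 4],   # 3-year-olds
          [ 5, 7],   # 4-year-olds
          [ 2, 10]]  # 5-year-olds

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```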

Results and Discussion


Results indicate that grip, movement, and variability measures all changed as a function of
age. As seen in Figure 1, type of grip used during scooping changed with age. A chi square analysis
revealed significant differences in the observed frequencies of power versus precision grips as a
function of age: toddlers used the power grip more frequently than any of the older children, and the
precision grip was used most often by the five-year-old children (χ2(3, n = 35) = 42.42, p < .0001). In
examining the movement measures, the younger children also took more time to perform each scooping
movement (F(3, 28) = 19.55, p < .001), scooped less rice (F(3, 50) = 3.21, p = .03), and spilled more
rice (F(3, 50) = 4.9, p = .005).
The results also revealed that more precise movements were found in self-focused and other-
focused actions when compared to task-focused actions. Additionally, variability increased as a
function of the type of implement used. For example, as seen in Figure 2, the number of spoon
orientations (F(4, 144) = 7.01, p < .0001) and the number of spoon orientation switches (F(4, 144) =
5.76, p = .0003) were larger in the task-focused activities compared to the self-focused and other-
focused activities. In addition, these measures were dependent on the spoon characteristics. The
three tasks using a typical concave spoon had the smallest number of spoon orientations and spoon
orientation switches while the flat spoon had the most.


Figure 1. The observed frequencies of the power and precision grips change with age: toddlers
use power grips more frequently and 5-year-olds use precision grips more frequently.
Figure 2. Two variability measures, the number of spoon orientations and the number of spoon
orientation switches, are larger in task-focused activities. These two measures also change as a
function of tool characteristics.

These results support the conclusion that there is a complex interplay between perceiving
properties of a hand-held tool and adjusting movement parameters in order to effectively use a
hand-held tool. In addition, the ability to perceive these relationships and effectively execute the
motor control necessary to adjust the tool movements changes developmentally and as a function of
tool characteristics. Distinguishing between self-focused and other-focused and task-focused actions
was identified as a potentially important variable in understanding tool-using tasks. More research
is needed to understand whether these differences are merely due to more experience in self-directed
tasks or a consequence of multiply-specified perceptual information, and to explore how the
structure of the movements changes as the perceptual-motor characteristics of the hand-held
implements are varied.

References
Connolly, K., & Dalgleish, M. (1989). The emergence of tool-using skills in infancy. Developmental Psychology, 25,
894-912.
McCarty, M. E., Clifton, R. K., & Collard, R. R. (2001). The beginnings of tool use by infants and toddlers. Infancy, 2,
233-256.
Steenbergen, B., van der Kamp, J., Smitsman, W. A., & Carson, C. R. (1997). Spoon handling in two-to-four-year-old
children. Ecological Psychology, 9, 113-129.
Thelen, E. (1995). Motor development: A new synthesis. American Psychologist, 50, 79-95.
Turvey, M. T. (1990). Coordination. American Psychologist, 45, 939-953.
Exploratory movement in the perception of affordances
for wheelchair locomotion

Moira B. Flanagan, Thomas A. Stoffregen & Chih-Mei Yang


Human Factors Research Laboratory, School of Kinesiology, University of Minnesota, USA
flana050@umn.edu

Introduction
We investigated the role of exploratory movement in the perception of participants’ ability to
locomote in a wheelchair. It is well established that movement facilitates perception (Gibson, 1979;
Scarantino, 2003; Stoffregen, 2000). Stoffregen, Yang and Bardy (2005) quantified movements
during stance, and showed that parameters of body movement were related to the accuracy of
judgments about participants' maximum sitting height. In adults, stance is an over-learned activity,
as is sitting on chairs. Presumably, the use of postural activity to optimize the perception of
maximum sitting height is also expert (though not conscious). In the present study, we asked
participants to perceive affordances that were novel to them. Healthy participants sat in a
wheelchair and made judgments about the minimum height under which they could pass while
rolling in the chair. Participants had no previous experience at wheelchair locomotion and,
accordingly, we assumed that their perception of this affordance would be relatively unskilled. We
measured head and torso motion during judgment sessions. To understand the role of locomotor
experience in the perception of locomotor abilities, we gave participants varying amounts of direct
experience at wheelchair locomotion prior to judgment sessions. It was hypothesized that there
would be significant differences in body movement elicited in participants with varying amounts of
direct experience at wheelchair locomotion.

Method
Experimental trials involved participants seated in a standard wheelchair, placed 3.66 meters
from the surface of a target. The target was a window blind, which was moved at 2 cm/s, with both
ascending and descending trials. Participants were instructed to sit in a wheelchair facing the
moving blind. Judgment trials included instruction for the participant to assess the passage height
and indicate to the experimenter when there was sufficient height to pass under the blind without
ducking. Non-judgment trials involved the participant just sitting and watching the window blind
movement. Each participant completed 12 alternating sets of judgment and non-judgment trials.

Prior to the experimental trials, participants were placed in one of three groups, which
involved different levels of novel movement exposure using a wheelchair. One group received no
wheelchair practice prior to experimental trials [Group 1, N=12]. The second group of participants
practiced moving themselves in a wheelchair twice along a 25 meter hallway, before beginning the
experimental trials [Group 2, N=12]. The third group received additional wheelchair exposure in
addition to the same hallway practice as the second group of participants [Group 3, N=13]. This
additional exposure involved participants practicing moving themselves in the wheelchair in the
specific environment within which they would later be tested during the experimental trials.
Participants in Group 3 were instructed to practice moving under this moving window blind four
times, with the blind placed at differing heights, prior to the experimental trials.

An electromagnetic tracking system was used to collect kinematic data. One receiver was
placed on the torso, using cloth medical tape. A second receiver was placed on top of the head,
using a headband. The transmitter was located behind the subject’s head, on a stand. Torso and head
position data were collected from each receiver and stored for later analysis.
Results and Discussion
A mixed repeated measures ANOVA was conducted upon head and torso data. Within
subjects factors included trials (12) and judgment (2), with exposure group (3) as the between
subject factor. Preliminary analyses revealed significant effects of all three variables upon body
movement measures. There was a significant main effect of trials, with decreased torso movement
being revealed within later trials as compared to earlier trials. There was also a significant main
effect of judgment, with decreased head movement elicited during judgment as compared to non-
judgment trials. Exposure group elicited similar main effects, with significant differences found in
measures of body movement between the three exposure groups, with the least amount of
movement found with participants receiving the greatest amount of wheelchair movement exposure.

The findings of a significant reduction in head movement within the judgment as compared to
the inter-judgment intervals strongly supported the previous findings of Stoffregen et al. (2005). In
addition to replicating numerous studies showing an effect of learning (trials) as well as attention
(judgment task) upon body movement (Andersson et al., 2002; Balasubramaniam & Wing, 2002;
Stoffregen et al., 2005), we also found support for our main hypothesis, with a significant influence
of wheelchair locomotion experience upon head and torso movement during judgment sessions.

As in Stoffregen et al. (2005), we found significantly lower levels of head movement during
judgment as compared to inter-judgment intervals in participants with greater levels of wheelchair
movement exposure, whereas a similar type of response was not elicited in participants without prior
wheelchair movement exposure.
movement exposure may necessitate greater amounts of exploration type movement during an
interval in which a judgment must be made. In contrast, prior wheelchair movement exposure
provided the other group of participants with affordance learning, rendering this type of exploratory
movement during judgment intervals not essential for this experimental group of participants.

References
Andersson, G., Hagman, J., Talianzadeh, R., Svedberg, A., & Larsen, H. C. (2002). Effect of cognitive load on postural
control. Brain Research Bulletin, 58(1), 135–139.
Balasubramaniam, R., & Wing, A. M. (2002). The dynamics of standing balance. Trends in Cognitive Sciences, 6(12),
531-537.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Scarantino, A. (2003). Affordances explained. Philosophy of Science, 70, 949–961.
Stoffregen, T. (2000). Affordances and Events. Ecological Psychology, 12, 1–28.
Stoffregen, T. A., Yang, C. M., & Bardy, B. G. (2005). Affordance judgments and nonlocomotor body movement.
Ecological Psychology, 17, 75-104.
Posture and the amplitude of eye movements
Marc Russell Giveans1, Thomas Stoffregen1 & Benoît G. Bardy2
1
Human Factors Research Laboratory, University of Minnesota, USA,
2
Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
givea017@umn.edu

Introduction
Previous research has documented integration of postural activity with the performance of
supra-postural tasks that require perceptual contact with the environment (e.g., Stoffregen et al.,
1999, 2000, 2006a). Stoffregen et al. (2006b) showed that the variability of head and torso position
is reduced when participants shift their gaze to follow a visual target oscillating in the
horizontal plane, relative to fixation of stationary targets. In their study, the amplitude of target
oscillations was fixed at 11° of horizontal visual angle. This amplitude was chosen as being within
the range in which shifts in gaze are achieved solely via eye rotations, without supporting rotation
of the head. Typically, eye movements are supplemented with head rotation for gaze shifts greater
than 15° of horizontal visual angle (Hallet, 1986). In the present study, we varied the amplitude of
target oscillations, including amplitudes that typically do and do not elicit supporting head
rotations. We sought to determine whether the variability of head and torso motion would be
functionally related to the visual task with and without the presence of functional head rotations for
larger target amplitudes.

Method
Twelve University of Minnesota undergraduates participated for course credit. All had normal
or corrected-to-normal vision. Eye position and movement were measured using electro-
oculography, or EOG (Biopac, Inc). Eye position was sampled at 62.5 Hz. Postural data (head and
torso motion) were measured using a magnetic tracking system (Fastrak, Polhemus, Inc), with each
receiver sampled at 60 Hz. One receiver was attached to a bicycle helmet worn by the participant,
while a second receiver was attached to the skin between the shoulder blades. Visual stimuli were
generated using PsyScope (Cohen et al., 1993), and presented on a Macintosh G3 computer with a
43 cm Apple Studio Display.

Participants were asked to stand approximately 86 cm from the display, with their feet
together. Participants were given the option of letting their arms hang to their sides, putting their
hands in their pockets, or holding their hands in front of or behind them. Whichever position they
chose, they were asked to maintain it for the duration of each trial, and not to move their hands
during trials. There were three conditions of target motion, one in which the target was stationary,
and two in which the target was displaced laterally (+/- 4.5° visual angle and +/- 12° visual angle).
In the latter two conditions, the target was presented in two positions, alternating between positions
(at a frequency of 0.5 Hz) to produce apparent motion. In each condition, the visual target consisted
of a filled red circle on a white background. Four trials were conducted in each condition, with each
trial lasting 65 seconds. For the two conditions in which the target moved, participants were asked
to shift their gaze so that they were always looking at the target’s current position, but not to
anticipate motion of the target. Participants were not asked to move their eyes, per se, but to shift
their gaze, and were instructed to do this as naturally as possible. Participants were invited to sit
down between trials, and were required to sit for at least 30 s following the sixth trial.

Results
Separate ANOVAS revealed a significant effect of visual condition on motion of the head and
torso (Figure 1a and 1b). Post-hoc t-tests revealed that variability of head and torso position in the
A-P axis was significantly decreased for both the 9° visual angle and 24° visual angle conditions as
compared to the stationary target condition (each p < .05). Analyses of data on eye movements and
head rotation are ongoing.

Figure 1. A. Effect of conditions on variability of torso motion in the AP axis. 1: stationary. 2: 9°. 3: 24°. B. Effect of conditions on variability of head motion in the AP axis. 1: stationary. 2: 9°. 3: 24°.

Discussion
The results indicate that postural motion was reduced in conditions in which participants
shifted their gaze, relative to the condition in which gaze was stationary. This was true both for
amplitudes of target motion that typically elicit supporting head movements, and for those that do
not. Further analysis will reveal whether participants used both head and eye movements when
shifting gaze.

References
Stoffregen, T. A., Hove, P., Bardy, B. G., Riley, M. A., & Bonnet, C. T. (2006a). Postural stabilization of perceptual but
not cognitive performance. Journal of Motor Behavior, in press.
Stoffregen, T. A., Bardy, B. G., Bonnet, C. T., & Pagulayan, R. J. (2006b). Postural stabilization of visually guided eye
movements. Ecological Psychology, in press.
Stoffregen, T. A., Pagulayan, R. J., Bardy, B. G., & Hettinger, L. J. (2000). Modulating postural control to facilitate
visual performance. Human Movement Science, 19, 203-220.
Stoffregen, T. A., Smart, L. J., Bardy, B. G., & Pagulayan, R. J. (1999). Postural stabilization of looking. Journal of
Experimental Psychology: Human Perception and Performance, 25, 1641-1658.
Motor learning and perception in reaching movements of flexible objects with
high and low correlations between visual and haptic feedbacks

Igor Goncharenko1, Mikhail Svinin2, Yutaka Kanou1 & Shigeyuki Hosoe2


1
3D Inc., 1-1 Sakaecho, Kanagawa-ku, Japan
2
Bio-Mimetic Control Research Center, RIKEN, Anagahora, Japan
igor@ddd.co.jp

Hand velocity profiles collected with a haptic-based system demonstrate that, as in the task
of motion in free space, constrained rest-to-rest motion of a flexible object can be predicted well by
the minimum hand jerk criterion. For the first experimental task, with low object flexibility, the
growth of motor learning over time, estimated as the trial success rate, was significant. In the second
task, with bimodal hand velocities and high object flexibility, all subjects quickly grasped the
object control strategy, but the initially achieved low success rate remained almost constant during
the rest of the experimental series.

Flexible object motion model


In computational neuroscience, the most common model of rest-to-rest motion in free space is
the so-called minimum hand jerk criterion (MJC). In the optimization approach, the performance index
J = ∫_0^T (d^n x / dt^n)^2 dt is minimized over the movement time T, giving constrained and bell-shaped
hand velocity profiles. Here, x is the hand contact point vector, n = 3, and boundary conditions with
zero first and second derivatives of x describe the rest-to-rest states. It was theoretically shown
(Svinin et al., 2006) that the MJC (unlike the minimum crackle criterion, MCC, n = 5) yields constrained
hand velocities for multimass dynamic systems. The flexible object is modelled as N masses connected
by springs (Fig. 1). An external force f_h is applied to the driving mass (large black sphere) and the
masses can virtually penetrate each other. The following rest-to-rest motion task is formulated for the
subjects: initially, all the masses coincide at rest at the start point (small left sphere); during a given
time T the subject moves the system to the target point (small right sphere); finally, all the masses
should again coincide at rest at the target point. Adopting the equations of multimass object
dynamics (Goncharenko et al., 2006) for the case of motion along a horizontal line for N equal
masses connected by springs of stiffness k, we have 2N ordinary differential equations:
dxi dvi
= vi , m = fi , 1 ≤ i ≤ N ; f i = k ( xi −1 − 2 xi + xi +1 ), 1 < i < N , f1 = k ( x2 − x1 ) + f h , f N = k ( x N −1 − x N ).
dt dt
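As a minimal illustration of this model (not the authors' code), the sketch below integrates the spring-chain equations with scipy while the driving mass is moved along the plain fifth-order minimum-jerk polynomial. The true MJC/MCC predictions for the constrained task come from solving the corresponding optimal-control problem (Svinin et al., 2006), so the polynomial is used here only as a convenient rest-to-rest input; the parameter values are those of task E1 described below.

import numpy as np
from scipy.integrate import solve_ivp

N, m, k = 5, 0.6, 600.0      # task E1: number of masses, mass (kg), stiffness (N/m)
L, T = 0.2, 2.25             # travel distance (m), movement time (s)

def hand(t):
    """Prescribed rest-to-rest hand trajectory (unconstrained minimum-jerk polynomial)."""
    tau = np.clip(t / T, 0.0, 1.0)
    return L * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def rhs(t, y):
    # y = [x2..xN, v2..vN]; the hand fixes x1(t)
    x = np.concatenate(([hand(t)], y[:N - 1]))
    v = y[N - 1:]
    f = k * (x[:-2] - 2 * x[1:-1] + x[2:])          # interior masses 2..N-1
    f = np.append(f, k * (x[-2] - x[-1]))           # last mass
    return np.concatenate((v, f / m))

sol = solve_ivp(rhs, (0, T), np.zeros(2 * (N - 1)), max_step=1e-3)
print("last-mass end position:", sol.y[N - 2, -1], "target:", L)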

Figure 1. Flexible object model.

Experimental results
A PC-based system was implemented to simulate the above model in real time. A PHANToM manipulator was used to measure hand position/velocity and to supply the feedback force fh at a haptic rendering rate of 1 kHz. When a subject catches the driving mass with the PHANToM’s stylus, haptic
interaction starts. To count successful trials, tolerances of target positions, velocities, accelerations,
and reaching time were introduced. If a trial is successful, haptic interaction is stopped and an audio
signal prompts the subject to proceed with the next trial. Five naïve right-handed subjects
participated in two experiments (E1, E2) with unimodal and bimodal hand velocity profiles
predicted by MJC and MCC.
Task E1. The following configuration was used: N=5, m=0.6 kg, k=600 N/m, travelling distance L=0.2 m, reaching time T=2.25±0.5 s. Position and velocity tolerances for all masses were set to Δx=±0.006 m and Δv=±0.006 m/s. Each subject completed 300 trials over 3 successive days: 100 trials on Day 1 for preliminary system learning, and 200 recorded trials split equally between Days 2 and 3. Even though they were unfamiliar with this unusual environment, all the subjects demonstrated a success rate of about 10% on Day 1. On Day 2 the rate varied from 32% to 46%, and on Day 3 from 55% to 93%. The last ten successful profiles from each subject on Day 3 are shown by grey lines in Fig. 2 (left). The integrated RMS measure of curve matching over all successful trials gives 0.0216 m/s and 0.0596 m/s for MJC and MCC, respectively. Note that visual and haptic feedback are highly correlated for task E1: the difference between the hand and last mass positions is relatively small during the whole time T. In this case both the hand and last-mass velocity profiles are unimodal and centered, as predicted by both MJC and MCC.
Task E2. The system was configured as follows: N=10, m=0.3 kg, k=100 N/m, L=0.2 m, T=2.35±0.4 s, Δx=±0.012 m, and Δv=±0.024 m/s. Additionally, for the hand, the start and end point position and velocity tolerances were set to Δxh = ±0.012 m and Δvh = ±0.05 m/s. The system configured for E2 exhibits “high flexibility”: the control strategy with the bimodal hand velocity profile predicted by MJC and MCC (solid and dashed black lines in Figure 2, right) gives a significant difference between hand and last mass positions during the movement. Note that the MJC and MCC theoretical last-mass velocity profiles are unimodal and centered for E2. Each subject completed 1200 trials in two days and the growth in success rate was small (between 9% and 17% on the first day and between 11% and 19% on the second day), while the level of 10% was already reached after approximately the first 100 trials. The experimental data of the successful trials are strongly in favour of MJC versus MCC (Figure 2, right), with integrated RMS values of 0.0260 m/s and 0.0524 m/s.
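The integrated RMS measure is not detailed further in the abstract; one plausible implementation (an assumption, not the authors' procedure) is to resample each successful trial onto a normalized time base and average the root-mean-square deviation from the theoretical velocity profile:

import numpy as np

def rms_mismatch(trials, v_theory, n=200):
    """trials: list of (t, v) arrays; v_theory: callable of normalized time in [0, 1]."""
    tau = np.linspace(0.0, 1.0, n)
    errs = []
    for t, v in trials:
        v_i = np.interp(tau, (t - t[0]) / (t[-1] - t[0]), v)   # resample one trial
        errs.append(np.sqrt(np.mean((v_i - v_theory(tau)) ** 2)))
    return np.mean(errs)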

Figure 2. Hand velocity profiles vh (m/s) versus time t (s) for tasks E1 (left) and E2 (right).

Conclusions and Discussion


The difference between MJC and MCC is fundamental: MJC implies that motion planning is done in the task space of hand coordinates, while MCC gives the control strategy in object coordinate space. Our results demonstrate that the experimental data are best matched by MJC, which gives lower magnitudes of the hand velocities. However, when the haptic and visual feedbacks are coupled, the correlation of the feedbacks is important for the progress of motor learning. The subjects described this situation as trying to move the hand-object system as a whole, maintaining the “lowest object flexibility” for task E1, while for task E2 they had to visually switch their attention from the “tail” of the system to the haptic proxy (virtual hand position). This may explain the low and stable success rate for E2 and the significant trend of motor learning progress for E1.

References
Goncharenko, I., Svinin, M., Kanou, Y., & Hosoe, S. (2006). Motor Training in the Manipulation of Flexible Objects in
Haptic Environments. CyberPsychology & Behavior, 9(2), 171-174.
Svinin, M., Goncharenko, I., Luo, Z.-W., & Hosoe, S. (2006). Reaching Movements in Dynamic Environments: How
Do We Move Flexible Objects? IEEE Transactions on Robotics, 22(4), 724-739.
Kinesthetic steering cues for road vehicle handling
under emergency conditions

Arthur J. Grunwald
Faculty of Aerospace Engineering, Technion, Israel Institute of Technology, Israel
grunwald@tx.technion.ac.il

In a drive-by-wire (DBW) system the angle of the front wheels of the vehicle is set by a
control system. Driver inputs to this system originate from a spring-loaded steering wheel or control
stick. Since the conventional mechanical link between the steering wheel and angles of the front
wheels no longer exists, the self-aligning torque, as a feedback channel, is unavailable to the driver.
This paper outlines the basic idea of an active kinesthetic control manipulator for road vehicles in
DBW systems. Its purpose is to enhance the handling qualities of the vehicle and to improve
emergency handling. The manipulator, driven by a DC torque motor, should furnish the driver with
kinesthetic information, which contributes in particular in emergency situations, such as a fully-developed four-wheel skid. Computer simulations of this concept have proved promising. A driver-in-the-loop simulator study is planned in order to validate the concept experimentally.

Drive-by-wire systems with force feedback


In our proposed system, the conventional steering column is replaced by two torque motors,
mounted one above the other. The upper torque motor is connected to the steering wheel and creates
the feedback torque felt by the driver, while the lower torque motor sets the steering angle of the
front wheels. This detachment serves two purposes: (1) steering wheel commands can be issued,
independent of the steering angles of the front wheels; and (2) a torque on the steering wheel can be
generated, independent of the self-aligning torque of the front wheels. With the DBW system, the
angle of the steering wheel no longer sets the steering angle of the front wheels. Instead, it
commands a desired state, such as the yaw rate or path-angle rate of the vehicle. The lower torque
motor executes the commands computed by the DBW system, realizing the desired state. The upper
torque motor in our system is used to communicate a feedback torque to the driver. In the usual
force feedback approach this torque is set proportional to relevant actual vehicle states such as the
turn rate, thus attempting to restore a sense of ‘natural feel’. Both the conventional vehicle and the
usual force-feedback drive-by-wire system will fail in a four-wheel skid: for the conventional
vehicle the self-aligning torque felt by the driver will be zero or erratic, whereas for the drive-by-
wire system the torque fed back to the steering wheel will be proportional to the erratic yaw-rate of
the spinning vehicle.

Drive-by-wire systems with kinesthetic feedback


The novelty of our approach is to employ kinesthetic feedback rather than force feedback in
the DBW system. The use of kinesthetic feedback in flight control has been studied extensively in
the past, Merhav and Ben Ya’acov (1976). It basically differs from force feedback in the sense that
a high-gain servo loop is used to set the steering wheel angle to a value, derived from the vehicle
states. This angle will inform the driver, through the kinesthetic cues, of relevant vehicle states. The
essence and novelty of the presented method is to set the steering wheel angle in accordance with
the predicted vehicle position. The approach is equally valid for normal, as well as for emergency
handling. The system should enable drivers who are unskilled in recovering from skids on icy or wet roads to remain in partial control of their vehicles during skidding. Furthermore, the system might prevent the vehicle from entering a fully developed four-wheel skid and could be seen as the “steering equivalent” of the anti-lock braking system.

The predicted vehicle path as kinesthetic cue


It has been shown in earlier work that the predicted vehicle path is an essential cue in
controlling a road vehicle, Grunwald and Merhav (1976). This cue is derived from the ‘optical flow
pattern’ or streamers, inherently existing in the visual field as seen through the windshield of a
moving vehicle. In a steady turn the circular vehicle path emerges from the pattern as the central
streamer. The driver is able to estimate this streamer quite accurately, Grunwald and Kohn (1993).
The basic underlying principle of our system is to use the upper torque motor to set the steering
wheel angle according to the predicted vehicle position. However, in order to bring the vehicle to a
desired different future position, the steering wheel should be rotated. It is shown that the torque
applied by the driver to rotate the steering wheel is proportional to the rate-of-turn, or the lateral
acceleration. This kinesthetic feedback system thus mimics the intuitive way objects are handled in
nature. Newton’s law states that a force, applied perpendicular to the direction of motion of an
object, causes its path to become circular. This situation is replicated by the kinesthetic feedback
system: the torque, exerted on the steering wheel, causes the vehicle to follow a curved path.
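As a rough sketch of this principle (the paper gives no explicit equations here, so the gains, the constant-yaw-rate path prediction and the omission of wheel inertia below are all illustrative assumptions, not values from the study):

import numpy as np

def reference_wheel_angle(y, heading, yaw_rate, v, t_pred=1.0, k_pred=4.0):
    """Steering-wheel angle commanded to the upper torque motor, derived from
    the lateral position predicted t_pred seconds ahead (constant yaw rate)."""
    heading_pred = heading + yaw_rate * t_pred
    y_pred = y + v * np.sin(0.5 * (heading + heading_pred)) * t_pred
    return k_pred * y_pred

def upper_motor_torque(theta_wheel, theta_ref, k_servo=50.0):
    """High-gain servo that holds the wheel at the kinesthetic reference angle.
    The torque the driver feels when rotating the wheel away from theta_ref is
    the reaction to this term, and grows with the commanded rate of turn."""
    return k_servo * (theta_ref - theta_wheel)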

Kinesthetic cues under normal and emergency conditions


Even under normal driving conditions, kinesthetic feedback has clear advantages over the
conventional vehicle. In the case of a sudden side wind or a lateral road surface tilt, the vehicle will
continue to move along a straight line, but the driver will experience these disturbances as a sudden
torque on the steering wheel. This torque constitutes a proprioceptive cue to the appearance of the
disturbances. However, the true merits of the method emerge in emergency handling: (1) the vehicle
will recover from the skid, regardless of driver intervention; (2) the forced motion of the steering
wheel and the inability of the driver to rotate the steering wheel in the direction of the desired future
position notify the driver of the physical limitations of the planned maneuver.

References
Grunwald, A. J., & Kohn, S. (1993). Flight-Path Estimation in Passive Low-Altitude Flight by Visual Cues. AIAA
Journal of Guidance, Control and Dynamics, 16, 363-370.
Grunwald, A. J., & Merhav, S. J. (1976). Vehicular Control by Visual Field Cues - Analytical Model and Experimental
Validation. IEEE Transactions on Systems, Man, and Cybernetics, 6, 835-845.
Merhav, S. J., & Ben Ya’acov, O. (1976). Control Augmentation and Work Load Reduction by Kinesthetic Information
from the Manipulator. IEEE Transactions on Systems, Man, and Cybernetics, 6, 825-835.
Creating prototypes of prospective user activities and interactions through
acting by design team and users

Kimitake Hasuike, Eriko Tamaru & Mikio Tozaki


Human Interface Design Development, Fuji Xerox Co., Ltd, Japan
kimitake.hasuike@fujixerox.co.jp

Introduction
In this paper, we describe an outline of our methods and practices to design and visualize
prospective user activities and interactions rapidly from the early phase of the design process. In
designing artifacts, text scenarios and scene sketches have been widely used to visualize prospective
user activities from the early phases of the design process. But these methods have limitations for discussing active and intuitive interactions. We think the keys are simple mockups and the acting out of future user activities in actual workplaces. These enable a design team to design and visualize realistic prospective situations and suitable interactions.

Need for prototyping prospective user activities and interactions


In innovative projects, it is important to visualize the target users' activities and interactions that will be enabled by the new technology and its applications, because the design team has to confirm, at an early phase of the project, that the activities and interactions being aimed for are better than the current ones. We therefore need methods for visualizing and creating prototypes of prospective users' activities and interactions even before the technologies that realize them are available (Buchenau & Suri, 2000).
Designers have been using several methods for this purpose; text scenarios, storyboards and scene sketches are typical examples. But these methods have limitations for discussing future interactions. For example, a text scenario has difficulty describing intuitive images, and sketches require particular representation skills and are difficult to draw cooperatively. As a result, some project members cannot participate in this visualization activity using such traditional means. We therefore need another approach with which project members can intuitively visualize interactions and discuss what would be natural and ideal for the target situation and users (Prabhu, Frohlich, & Greving, 2004).

Creating prototypes of future interactions through acting


Our method uses skit acting with simple mock-ups to discuss and visualize future scenes, activities and interactions, and it uses photos of these skits, instead of designers' hand-drawn scene sketches, as resources for communication about possible futures. We call this technique the “Scene Photo” method. In this method, designers and technology researchers act out future activities and interactions in the actual environment.
They act out skits using mockups and actually existing objects at the target workplace, thinking while acting about the features of the envisaged technologies and the ways of interacting with them. In this approach, even non-designers can fully participate in the visualization process, and all project members can discuss what would be natural and ideal for the scene.
In our company, we conducted several sessions using this method in an electronic paper project. In these sessions, designers and technology researchers acted out activities in the target scenes and studied suitable usages and intuitive interactions with simple mockups (Figure 1).

Involving users into the interaction design process


Users organize their practical activities using various resources in their workplaces. These
practices are sometimes beyond the designer's assumptions. Artifacts are sometimes used for aims
other than those that designers have assumed. They are also sometimes treated in unexpected ways.
Therefore, in the design of a new system, the design team should consider the broad range of
activities that would exist in the target workplaces (Kuhn, 1996) (Bannon & Bødker, 1991).

Figure 1. Photos of acting out future scenes in the target workplaces.

In our method, we aimed to connect research on current user activity smoothly with future visualization. It is important to confirm how the technology would be used and what type of interactions would fit into the environment in the future. But it is not easy to know the answer: users are not always aware of their current activities and real needs (Mattelaki & Battarbee, 2002).
We prepared three phases in our method. In the first phase, we conduct an on-site interview to hear about the users' actual everyday life and work. In this phase, we learn from users about their jobs, colleagues, customers, workplace, tools, information, and so on. After that, in the second phase, we select some typical scenes from the interview. We then ask the participant to reproduce the arrangement of the tools at their workplace, and to act out some of their activity retrospectively. In this phase, we learn from users how they organize artifacts for their activity in each scene, and what types of interactions they usually perform. In the third phase, just after the retrospective acting for each scene, we hand the user simple mockups of the application of the target technology. We then explain the outline of the technology to the user and ask them to act with the mockups while thinking about their future activity (Hasuike, Tamaru & Tozaki, 2005).
We conducted a series of sessions using this method with multiple participants. In those trials, we used an hour for the interview and another hour for the acting out. We were able to obtain outlines of the participants' actual work, and their conceptions of future activities and interactions for two or three typical work scenes.

Conclusion
We have proposed methods for intuitively prototyping prospective user activities and interactions, and we are also considering how to connect research on current user activities and interactions smoothly with the design of future interactions. Through the case studies, we were able to confirm that our methodology is fundamentally effective for designing more intuitive and suitable interactions for users' activities and workplaces.

References
Bannon, L. J., & Bødker, S. (1991). Beyond the Interface: Encountering Artifacts in Use. In J. Carroll (ed.) Designing
Interaction: Psychology at the human-computer interface. Cambridge: Cambridge University Press. Chapter 12.
Buchenau, M., & Suri, J. F. (2000). Experience Prototyping, Proceedings of Designing Interactive Systems (DIS’00),
pp.424-433.
Hasuike, K., Tamaru, E., & Tozaki, M. (2005). Methods for Visualizing Prospective User Activities and Interactions
through Multidisciplinary Collaboration. Proceedings of International Association of Societies of Design
Research (IASDR 2005).
Kuhn, S. (1996). Design for People at Work. In Winograd, T. (ed.), Bringing Design to Software. NY, ACM Press:
pp.223-289.
Mattelaki, T., & Battarbee, K. (2002). Empathy Probes. Proceedings of the Participatory Design Conference (PDC’02).
Prabhu, G., Frohlich, D. M., & Greving, W. (2004). Contextual Invention: A Multi-disciplinary Approach to Develop
Business Opportunities and Design Solutions. Design Research Society 2004 International Conference.
A couple of robots that juggle balls:
The complementary mechanisms for dynamic coordination

Hiroaki Hirai & Fumio Miyazaki


Department of Mechanical Science and Bioengineering, Graduate School of Engineering Science,
Osaka University, Japan
hirai@robotics.me.es.osaka-u.ac.jp

The skill of juggling has been an interesting field for the investigation of motor control by neuroscientists, psychologists, and robotics researchers. This complex rhythmic task involves motor skills such as sensorimotor coordination, motor learning and inter-limb coordination. To unravel the intrinsic mechanisms underlying such abilities, we have developed a couple of robots that perform a juggling-like ball passing task (Figure 1). The two robots have the same configuration. The linear actuator of each robot drives a paddle back and forth and the paddle hits
two balls at the appropriate timing. A touch sensor is attached to each robot's paddle, and the time
stamp of the ball's contact with the robot's paddle and its partner's paddle is detectable. In this
presentation, we introduce a biologically inspired hierarchical architecture for rhythmic
coordination between robots. The proposed robot architecture consists of three mechanisms, which
are an active control mechanism and two passive-control mechanisms. These mechanisms are
complementary and the synergy of different mechanisms leads to the stable performance of the
successful task. This talk especially focuses on two passive-control mechanisms, which include (1)
open-loop stability of mechano-physical system and (2) entrainment between mechano-physical
system and neuron-like pattern generating system. These passive-control mechanisms lead to the
emergence of self-organized temporal structure in the whole system and realize the stable
coordinated motion between robots.

Figure 2 displays the video sequences of a stable motion pattern in robot juggling/passing. In
the early part of the video sequences (indexed 1-8), two robots coordinate their motions and keep
hitting two balls successfully. At index 9, to show the adaptability of our robots, we intentionally disturbed a ball's motion so that the robots failed the task.

Figure 1. Perceptual-motor system for dynamic coordination



Figure 2. Dynamic pattern emerging in the two-ball-passing task

The difference between the two balls' phases changed from out of phase to almost in phase. However, after the trial-and-error process (indexed 10-34), both robots autonomously recover the original stable motion by adjusting the timings of their paddle movements to each other. In the later part of the task (indexed 35-40), the two robots keep hitting the two balls again in the original coordinated pattern. This coordinated motion between robots, which is the self-organized temporal pattern of the whole system, emerges from the interaction between each robot and the environment. Each robot can then be regarded as one large oscillating object, and the robotic oscillating objects are mutually pulse-coupled through the balls. The discrete rhythm information about motion is mutually transmitted between the two robots through the environment (a ball), permitting the robots to self-organize their motion through discrete coupling. As a result, the robots can recover and perform a stable coordinated motion even if the difference between the phases of the balls' motion varies owing to disturbances. This dynamics-based approach is useful not only for adaptive motion between robots but also for the emergence of coordinated motion.
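This discrete coupling can be illustrated with a toy model of two pulse-coupled phase oscillators (a deliberately simplified sketch, not the robots' actual controller): each oscillator "fires" when its phase wraps, nudging the partner's phase, and the pair relocks after a perturbation much as the robots recover from the disturbed ball. Frequencies and coupling gain are assumed values.

import numpy as np

dt, eps = 0.001, 0.05                 # time step (s) and coupling strength
freq = np.array([1.00, 1.03])         # slightly detuned natural frequencies (Hz)
phase = np.array([0.0, 0.45])         # initial phases

for step in range(int(20 / dt)):      # simulate 20 s
    phase += freq * dt
    for i in (0, 1):
        if phase[i] >= 1.0:           # oscillator i fires (paddle hits a ball)
            phase[i] -= 1.0
            # phase-attractive pulse delivered to the partner
            phase[1 - i] += eps * np.sin(2 * np.pi * (phase[i] - phase[1 - i]))
    if step == int(10 / dt):          # perturbation, like the disturbed ball
        phase[0] = (phase[0] + 0.4) % 1.0

print("final phase difference:", (phase[0] - phase[1]) % 1.0)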

References
Hirai, H., & Miyazaki, F. (2006). Emergence of a dynamic pattern in wall-bouncing juggling: self-organized timing
selection for rhythmic movement. Robotica, 24, 273-293.
Hirai, H., & Miyazaki, F. (2006). Dynamic Coordination Between Robots: The Selforganized Timing Selection in a
Juggling-like Ball Passing Task. IEEE Transaction on Systems, Man and Cybernetics – PART B, in press.
Ronsse, R., Lefèvre, P., & Sepulchre, R. (2004). Open-loop stabilization of 2D impact juggling. 6th IFAC Symposium
Nonlinear Control Systems, pp. 1157-1162. 1-3 September, Stuttgart, Germany.
Sternad, D., Duarte, M., Katsumura, H., & Schaal, S. (2000). Dynamics of a Bouncing Ball in Human Performance.
Physical Review E, 63, 011902, 1-8.
Williamson, M. (1998). Neural Control of Rhythmic Arm Movements. Neural Networks, 11, 1379-1394.
Perception of multiple moving objects through multisensory-haptic interaction:
Is haptic so evident for physical object perception?
Maxime Houot1, Annie Luciani1 & Jean-Loup Florens2
1 ICA Laboratory, INPG-Grenoble, France
2 ACROE, Grenoble, France
maxime.houot@imag.fr

This paper deals with the perception of object numerosity from sounds and/or images and/or haptics. We used virtual reality to realise a Virtual Pebble Box: the experimental context corresponds to a box containing hard mobile objects that can be handled with a rigid stick. The state of the art deals with the perception of stimulus numerosity, whereas we wanted to study the perception of object numerosity from their stimuli. For example, Koesling et al. (2004) show that linking dots with line segments leads to an underestimation of their number. Allik and Tuulmets (1991) report a similar result: dot area influences dot numerosity judgements. Object identification is not broached, except by Essl et al. (2005), who show that participants estimate fairly well the number of solid objects with which they interact, and the control they have in a noisy interaction. From these results, we first wanted to isolate the modality that allows the kind of environment explored to be inferred, and to cover the scale of confinement (numerosity being related to a confined space) in order to study the perception of several objects. But these two experiments seemed premature compared to the state of the art. Consequently, a preliminary upstream experiment was necessary, particularly in view of the very surprising reactions of subjects manipulating two different implementations of the Virtual Pebble Box in an explorative test session performed at INPG in collaboration with the University of Lund (2005). To that end, a non-quantitative experiment was set up, allowing us to analyse participants' comments on what they experienced while manipulating a Virtual Pebble Box with a high-fidelity haptic ©ERGOS Technologies device (www.ergostechnologies.com).
The results lead to a surprising conclusion: the perception of mobile objects, with haptics only and in multisensory situations, is not so trivial, and exhibits strong, stable and surprising paradoxes. This allows us to ask new questions about the multisensory perception of moving objects.
Method
The 2D Virtual Pebble Box used is composed of a circular box containing 8 more or less rigid mobile masses, interacting through more or less visco-elastic collisions, plus 1 more or less rigid mass (the manipulator) controlled by the ERGOS device. 6 participants manipulated 54 different models of the Virtual Pebble Box: 9 physical parameter sets for the Pebble Box (always containing 8 masses) × 6 different sensory feedbacks.
The 9 physical parameter sets combined:
• 3 different interactions: (1) hard collisions, small viscosity; (2) hard collisions, medium viscosity; (3) soft collisions, medium viscosity;
• with 3 different sizes of masses: (1) 8 big masses, 1 small manipulator; (2) 8 medium masses, 1 medium manipulator; (3) 8 small masses, 1 big manipulator.
The 6 sensory feedbacks were: (1) haptic only, (2) haptic + audio, (3) haptic + “ball-like visual rendering”, (4) haptic + “continuous matter-like visual rendering”, (5) haptic + “ball-like visual rendering” + audio, (6) haptic + “continuous matter-like visual rendering” + audio. Figure 1 shows the same Pebble Box with the two different visual renderings: “ball-like visual rendering” on the left and “continuous matter-like visual rendering” on the right. In this figure, the Pebble Box is composed of 8 big masses (represented in yellow on the left) and one small mass for the manipulator (represented in green on the left). The participants manipulated the Virtual Pebble Box for as long as they wanted. They were invited to comment freely, in their own vocabulary, on the scene they identified. The experiment was filmed.
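For bookkeeping, the 54 models simply enumerate as the product of the three condition factors (a trivial sketch; the labels below paraphrase the lists above and carry no simulation):

from itertools import product

interactions = ["hard collisions, small viscosity",
                "hard collisions, medium viscosity",
                "soft collisions, medium viscosity"]
sizes = ["8 big masses, 1 small manipulator",
         "8 medium masses, 1 medium manipulator",
         "8 small masses, 1 big manipulator"]
feedbacks = ["haptic only", "haptic + audio", "haptic + ball-like visual",
             "haptic + continuous matter-like visual",
             "haptic + ball-like visual + audio",
             "haptic + continuous matter-like visual + audio"]

conditions = list(product(interactions, sizes, feedbacks))
assert len(conditions) == 3 * 3 * 6 == 54   # 9 physical parameter sets x 6 feedbacks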

Figure 1. Left: “ball-like visual rendering”. Right: “continuous matter-like visual rendering”.

Results
With only the haptic feedback, the perception of the 9 different Virtual Pebble Boxes mainly led the participants to conclude either that they perceived only one object (among 8) or that they perceived a kind of continuous environment, according to the values of the physical parameters. That is the biggest paradox, because the scene is objectively composed neither of a single object nor of a continuous matter. The following examples illustrate this paradox:
a. Example 1:
Description: 8 big and hard masses; one small and hard manipulator.
Result: With haptics only, the 6 participants detected only 1 object among 8.
b. Example 2:
Description: 8 medium and soft masses; one medium and soft manipulator.
Result: With haptics only, the 6 participants perceived a continuous environment composed of a single matter.
When other modalities are added, the paradox is attenuated but does not change fundamentally. These results are all the more surprising as 4 participants knew the scene perfectly well, being its designers: they concluded something completely different from what they knew objectively and in detail.
Main conclusions and perspectives
1. The analysis of the results shows that estimating the number of objects is not a spontaneous focus when manipulating a Virtual Pebble Box: only 4% of the manipulations led to counting.
2. The difference between “one/single” and “several” is not so easy: only about 6% of the manipulations led to a clear perception of several objects.
3. Judging the kind of environment (one object vs. a continuous environment) seems to be a more natural task. But which parameters influence this perception? As shown in Example 2, the elasticity of the collisions seems to play an important part.
In view of these results, we may ask ourselves whether haptics is so self-evident in the perception of moving material objects. It is a surprising finding that opens new issues in the cognitive field. In addition, multisensory feedback does not radically change the conclusions; at best, the visual rendering reduces the paradox.
Several prospects follow from these observations:
1. Why do participants identify only 1 object among 8 identical ones (as in Example 1)? Are there minimal sensory-haptic conditions allowing a given moving object to be inferred?
2. Comparing the models leading to the perception of a continuous environment with models of a continuous environment could help us to understand the phenomena of Example 2.
References
Essl, G., Magnusson, C., Erikson, J., & O’Modhrain, S. (2005). Towards evaluation of performance, control, and
preference in physical and virtual sensorimotor integration. 2nd International Conference on Enactive Interfaces,
17-18 November, Genoa, Italy.
Allik, J., & Tuulmets, T. (1991). Occupancy model of perceived numerosity. Perception & Psychophysics, 49(4), 303-
314.
Koesling, H., Carbone, E., Pomplun, M., Sicchelschmidt, L., & Ritter, H. (2004). When more seems less - non-spatial
clustering in numerosity estimation. Early Cognitive Vision Workshop ECOVISION 2004, 28 May - 1 June, Isle of Skye, UK.
SHAKE – Sensor Hardware Accessory for Kinesthetic Expression

Stephen Hughes1 & Sile O’Modhrain2


1 SAMH Engineering Services, Ireland
2 Sonic Arts Research Centre, Queens University, Ireland
sile@qub.ac.uk

Interaction designers are afforded a wealth of software development tools for both mobile
phones and pocket PCs but are poorly served by the sensing capabilities found in such devices.
Input and output modalities on the majority of mobile devices are still limited to buttons and voice
for input and visual, audio and vibration for outputs. SHAKE attempts to address some of these
shortcomings by providing an integrated package containing inertial sensing, magnetic sensing and
a relatively high-fidelity vibrotactile display in a self-powered unit which connects via Bluetooth to
any Bluetooth-enabled device. It is intended as a prototyping tool for gesture-based interfaces and
applications.

The approach we have taken in developing SHAKE is to provide a means of both sensing the
movement of the device to which the SHAKE is attached using inertial sensors and positioning the
unit with respect to the world using a magnetic compass. Finally we provide a vibrotactile display
to allow for display of information directly to the hand performing the gesture. Inertial sensing is
still uncommon in mass market mobile computing devices, yet this form of sensing offers designers
of applications and games an enormous benefit by providing access to a very rich mode of
interaction – gesture. Magnetic compass sensing is also lacking on current mobile devices and this
can be of great use in navigation and for rotating the view in large information spaces. Finally, the
quality of vibration feedback is often limited to a very rudimentary output profile even though our
tactile senses can discern a large bandwidth of information through this modality.

As well as functioning as a self-contained unit, SHAKE also includes the capability to plug in
additional sensors. This allows the unit to be integrated into more specific applications where
additional signals might be required, e.g. heartrate monitoring, capacitive sensing, and so on.

In summary, the primary goal in developing SHAKE was to enable designers of interactive
systems to quickly try out new interaction paradigms without having to develop custom hardware. It
is hoped that designers will be able to quickly integrate the SHAKE’s sensors by mounting the unit
on whatever device they are working with (phone, musical instrument, etc.) and integrating its
sensor data into their applications by parsing the ASCII data from the Bluetooth Serial Port Profile.
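A hedged sketch of such an integration with pyserial is shown below; the $ACC line format, frame tag and port name are purely illustrative assumptions, since the actual SHAKE packet format is defined in its own documentation rather than in this abstract.

import serial

def read_accelerometer(port="/dev/rfcomm0", baud=115200, n_samples=100):
    """Collect n_samples accelerometer triples from a hypothetical ASCII stream."""
    samples = []
    with serial.Serial(port, baud, timeout=1.0) as link:
        while len(samples) < n_samples:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if line.startswith("$ACC"):                  # hypothetical frame tag
                _, x, y, z = line.split(",")
                samples.append((float(x), float(y), float(z)))
    return samples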

Description
SHAKE’s main features include –
• Triple axis linear accelerometer with programmable full scale range of either +/-2g or +/-6g
and resolution of 1mg
• Triple axis magnetometer with full scale range of +/- 2 Gauss and resolution of 1mGauss
• Two proximity capacitive sensors on the front of the enclosure that can measure human
body proximity to a distance of 5mm
• Two channel external analog input available over the Aux connector for connecting general
purpose sensors with a 3V 10mA regulated power source available on the same connector.
• Compass heading algorithm outputs heading information for any orientation of the SHAKE
with resolution of 1deg
• A three position side navigation switch for general use in the end user’s application
• Integrated programmable vibrating motor with braking capability allowing the display of
immediate haptic feedback to gestures or other stimuli.
• On board 64Mbit of FLASH memory for data logging
• Accurate Built in real time clock (RTC) for precise time stamping of data samples or events
• Effective resolution on all sensor channels is greater than 12bits
• All sensor channels sampled at 1kHz
• All sensor channels can have the transmission data rate adjusted between 25Hz and 256Hz
with tracking linear phase filtering to better than -50dB stopband attenuation
• Two internal expansion slots for add-on modules such as the triple axes angular rate sensor
(gyros)
• Firmware upgradable over the BluetoothTM radio or using a serial cable
• Dimensions of just 53.6 mm × 39.7 mm × 15.9 mm
• Up to 10 hours operation on a single battery charge

Experimental stability analysis of a haptic system
Thomas Hulin1, Jorge Juan Gil2, Emilio Sánchez2, Carsten Preusche1 & Gerd Hirzinger1
1 DLR, Institute of Robotics and Mechatronics, Germany
2 CEIT and University of Navarra, Spain
Thomas.Hulin@dlr.de

An elementary prerequisite for haptic applications is to preserve stability. Numerous studies


have been presented in the past dealing with ensuring stability for haptic interfaces. The passivity
condition of Colgate et al. (1994; 1997) represents one of the most cited theoretical studies towards
a common stability condition. Although ensuring passivity of haptic devices is a general approach,
it has the disadvantage of being conservative in terms of stability.

The exact stability boundary for undelayed 1DoF haptic devices colliding with a virtual wall
was derived by Gil et al. (2004). They also showed that the human operator tends to stabilize an
impedance controlled system. Therefore, the worst-case in terms of stability is a system where the
operator is omitted.

Hulin et al. (2006a) enhanced this approach by specifying the time delay as a parameter. Different
control rules were compared inside the stable regions for undelayed and one sample step delayed
force. In further studies (Hulin et al., 2006b), the influence of the physical damping was also
included. The stability boundaries depending on the parameters of the impedance control were
numerically computed. They normalized the parameters involved in the control loop in order to
obtain dimensionless stability boundaries.
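The kind of sampled-data simulation behind such boundaries can be sketched as follows (an assumption-laden reconstruction, not the authors' code): a 1-DoF inertia coupled to a bilateral virtual spring-damper wall rendered at 1 kHz with a fixed loop delay, no operator and no physical damping; the critical stiffness for a given virtual damping is taken as the largest value for which the free response still decays. The inertia (0.8 kg m²), sampling rate and 5 ms delay are taken from the experiments reported below; everything else, including the decay-based stability test, is illustrative.

import numpy as np

def is_stable(K, B, m=0.8, dt=1e-3, delay_steps=5, steps=4000):
    x, v = 1e-3, 0.0                       # small initial deflection (rad)
    buf = [0.0] * delay_steps              # pending (delayed) force samples
    peak_early = peak_late = 0.0
    for k in range(steps):
        buf.append(-K * x - B * v)         # bilateral virtual wall, sampled at 1 kHz
        f = buf.pop(0)                     # force actually applied delay_steps later
        v += (f / m) * dt                  # device dynamics (pure inertia)
        x += v * dt
        if k < steps // 2:
            peak_early = max(peak_early, abs(x))
        else:
            peak_late = max(peak_late, abs(x))
    return peak_late < peak_early          # decaying response -> stable

def critical_stiffness(B, k_max=2000.0, dk=5.0):
    K = dk
    while K < k_max and is_stable(K, B):
        K += dk
    return K - dk

print("K_CR for B = 1 Nm s/rad:", critical_stiffness(1.0), "Nm/rad")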



Figure 1. Four different configurations of the DLR light-weight robot for the experiments: joint
angle = [0°, 30°, 45°, 90°].

This publication evaluates the dimensionless stability boundaries by experiments using the
DLR light-weight robot (see Figure 1). Two different phases of experiments are performed, concerning the dependence of the stability boundary on (A) the mass (inertia) and (B) the time delay. In all experiments only one robot joint is active at a time. For the first phase of experiments the inertia is modified by changing the configuration of the robot as shown in Figure 1. The effect of different time delays is analyzed by adding a delay inside the control loop in the second set of experiments.

Experimental results
A bilateral virtual wall consisting of a virtual spring and damper is implemented in the third
axis of the robot (rotation angle φ in Figure 1). Limit stable parameter values are obtained when sustained oscillations are observed while increasing the stiffness. The environment is implemented on a computer connected via Ethernet to the robot. The sampling rate is 1 kHz and the overall loop contains a delay of 5 ms. No user is involved in the experiments. Figure 2 a)-c) shows the experimental results for several fixed values of the virtual damping. The theoretical behavior is depicted with dotted lines.

The results for the experiments with four different robot configurations are shown in Figure 2
a). The best correspondence between the theory and the experiments is achieved for the
configuration with maximum inertia (Figure 1 d). The second set of experiments is performed with
this configuration for different values of the time-delay. Figure 2 b) shows the stability boundary for
different delays (5 ms, 6 ms and 10 ms). A very large delay has also been introduced in the system
in order to receive a curved stability boundary. Figure. 2 c) shows the experimental stability
boundary for an overall delay of 55 ms. The beginning of the stability boundary for a delay of 10
ms is also shown. The theoretical stability curve in Figure 2 c) has been computed using the inertia
of the device in the configuration selected for the second set of experiments: 0.8 kg m2.
Figure 2. Experimental stability boundaries (critical stiffness KCR in Nm/rad versus virtual damping B in Nm s/rad) compared to the theoretical ones (dotted lines): (a) the four robot configurations of Figure 1; (b) time delays of 5, 6 and 10 ms; (c) delays of 10 and 55 ms.

Conclusions
The set of experiments performed with the DLR light-weight robot shows that the
theoretically determined normalized stability conditions hold for real haptic devices. If the delay is
small, the stability boundaries can be assumed to be linear. Therefore, larger virtual stiffness can be
implemented when adding virtual damping. However, for large values of stiffness the hardware
limitations are rapidly reached.

In the theoretical conditions, the inertia does not play any role in the linear part of the stability
boundary. In practice, when the inertia is reduced, the experimental stability boundary does change slightly.

References
Colgate, J. E., & Brown, J. (1994). Factors affecting the z-width of a haptic display. IEEE International Conference on
Robotics and Automation, San Diego, CA, May, pp. 3205-3210.
Colgate, J. E., & Schenkel, G. (1997). Passivity of a class of sampled-data systems: Application to haptic interfaces.
Journal of Robotic Systems, 14(1), 37-47.
Gil, J. J., Avello, A., Rubio, A., & Flórez, J. (2004). Stability analysis of a 1 dof haptic interface using the routh-hurwitz
criterion. IEEE Transactions on Control Systems Technology, 12(4), 583-588.
Hulin, T., Preusche, C., & Hirzinger, G. (2006a). Stability boundary and design criteria for haptic rendering of virtual
walls. 8th International IFAC Symposium on Robot Control, Bologna, Italy, September.
Hulin, T., Preusche, C., & Hirzinger, G. (2006b). Stability boundary for haptic rendering: Influence of physical
damping. IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, October.
Stabilization of task variables during reach-to-grasp movements
in a cluttered environment

Julien Jacquier-Bret, Nasser Rezzoug & Philippe Gorce


Laboratoire ESP EA 3162, University of Toulon – Var, France
gorce@univ-tln.fr

The study of movement variability may provide valuable information on the way constraints
are handled by the central nervous system (CNS). In the presence of joint redundancy (Bernstein,
1967), recent studies have evidenced that goal equivalent joint configurations might be used by the
CNS instead of stereotypical movement patterns. In particular, Scholz and Schöner (1999)
suggested that a subset of joint configurations (termed the uncontrolled manifold or UCM) might be
chosen such that variability in this subset does not affect the task variable while variability in the
orthogonal subset (ORT) does. If a chosen parameter is actively stabilized, variance within the
UCM (VUCM) should be higher than the variance within ORT (VORT). In the framework of upper-
limb goal directed movements, this paradigm was applied to study the influence of accuracy
constraints and learning for uni and bimanual reaching tasks (Tseng et al., 2003; Domkin et al.,
2005). The present work is focused on the transport phase of grasping movements performed in a
cluttered environment. This category of movements is very common and its consideration may
provide information for biomechanics, ergonomics or robotics. Also, it may help to better
understand how spatial constraints are handled by the CNS. The Cartesian positions of the wrist and elbow along each direction (X medial-lateral, Y anterior-posterior and Z vertical) were considered as the task variables. Their stabilization was investigated within the UCM paradigm. It was hypothesized that a high ratio VUCM/VORT would be observed at the end of the movement and when the wrist and the elbow reach over an obstacle.

Methods
Twenty persons (178.7 ± 6.4 cm, 26.2 ± 5 years) volunteered as subjects. They were asked to grasp a sphere (∅ 5.5 cm) located in front of them at a distance of 50 cm, where required reaching over an obstacle. N=10 randomized trials were performed for each of 3 conditions: without obstacle (WO), obstacle of height 15 cm (MO) and obstacle of height 20 cm (BO). The phase between movement onset and object grasp was studied. The upper-limb and trunk movements were recorded with two Flock of Birds™ sensors and a Vicon™ system. Joint angles were derived according to the ISB recommendations. The ratio VUCM/VORT was evaluated for the 3 conditions and for 10 time bins, each corresponding to 10% of the normalized movement duration. In joint space, a deviation from the mean trajectory can be considered as the sum of two components: Δθ = θ⊥ + θ|| (1). θ⊥ represents the variation of joint angles that contributes to the task variable deviation ΔX = J Δθ from its mean trajectory (linearized arm forward model), while θ|| corresponds to self-motion. θ|| is obtained by projecting Δθ onto the null space of the arm Jacobian matrix J, and θ⊥ is then computed from (1). Finally, VUCM = Σ(θ||)² / [N(n − d)] and VORT = Σ(θ⊥)² / (Nd), where d is the dimension of the task space and n the number of degrees of freedom of the arm model. A 3×10 repeated-measures ANOVA was conducted separately for each direction (X, Y and Z) of the wrist and elbow movements.
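A minimal numerical sketch of this decomposition (an assumed implementation following the standard UCM formulation of Scholz & Schöner, 1999; the function name and the SVD-based null-space basis are illustrative choices, not taken from the paper):

import numpy as np

def ucm_variances(joint_angles, jacobian):
    """joint_angles: (N_trials, n) matrix for one time bin; jacobian: (d, n)."""
    n_trials, n = joint_angles.shape
    d = jacobian.shape[0]
    dtheta = joint_angles - joint_angles.mean(axis=0)     # deviations from the mean
    _, _, vt = np.linalg.svd(jacobian)
    null_basis = vt[d:].T                                 # (n, n-d) null space of J
    theta_par = dtheta @ null_basis                       # coordinates within the UCM
    theta_perp = dtheta - theta_par @ null_basis.T        # orthogonal components
    v_ucm = np.sum(theta_par ** 2) / (n_trials * (n - d))
    v_ort = np.sum(theta_perp ** 2) / (n_trials * d)
    return v_ucm, v_ort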

Results
A significant effect of time was evidenced for the wrist in the Y direction (F9, 171=57.023,
p<0.0001) (Figure 1. A/). Newman-Keuls post-hoc tests (p<0.01) evidenced a lower ratio during the
middle phase of the movement (2.68±0.91 at 40%, 2.23±0.49 at 50% and 2.66±0.64 at 60%)
compared to the early (6.09±2.00 at 10%, 5.35±1.76 at 20% and 3.85±1.15 at 30%) and late phases
(4.07±1.20 at 70%, 5.78±1.80 at 80%, 6.75±2.64 at 90% and 7.18±3.00 at 100%). Also, an
interaction between time and condition (F18, 342=5.1052, p<0.00001) was observed for the elbow in
the Z direction. Post-hoc tests evidenced an increase of the ratio at 50% of total duration for MO
(3.09±1.45, p<0.05) and BO (2.98±1.15, p<0.05) compared to WO (1.92±0.63) (Figure 1. B/).

Figure 1. VUCM/VORT ratio A/ Wrist, Y axis, B/ Elbow, Z axis (*, p<0.05, **, p<0.01).

Discussion
The stabilization of task variables during a reach-to-grasp movement was investigated. Results
showed that a measure of the level of task-variable control may vary according to time and to
experimental conditions. The Y direction corresponds to the movement main axis while the Z
direction is the most constrained by the obstacles. Two hypotheses may account for the lower
values of the ratio during the middle phase of the movement for the wrist in the Y direction. Firstly,
it may be more difficult to control the task variables during this phase because it coincides with the
maximal wrist velocity and joint interaction torques (Hollerbach, 1982; Tseng et al. 2003). The
second possible explanation is that it may not be necessary to strictly control this parameter in the middle of the task, because deviations from the mean trajectory at this moment may not affect the
final outcome of the task. This observation may be in agreement with the minimum intervention
principle that states that deviations from a reference trajectory are corrected only if they degrade the
task performance (Todorov & Jordan, 2002). Along the same lines, the Y position of the wrist is highly constrained at the end of the movement. The high values at the beginning of the task simply reflect the fact that the subjects’ initial position was standardized. The results for the Z axis show that the obstacles affect the organization of joint movements controlling the elbow's vertical
position. Indeed, the passage over the obstacle is made approximately at the middle of the task
duration and it coincides with an increase of the ratio for the MO and BO conditions compared to
WO. Other studies have shown the influence of obstacles on the coordination between the transport
and grasping phases (Dean & Bruwer, 1994; Saling et al., 1998). In the present study, results are
presented that support the hypothesis that flexible goal equivalent joint configurations may be used
in adaptation to spatial constraints represented by obstacles.

References
Bernstein, N. A. (1967). The co-ordination and regulation of movements. Pergamon, London.
Dean, J., & Bruwer, M. (1994). Control of human arm movements in two dimensions: path and joint control in avoiding
simple obstacles. Experimental Brain Research, 97, 497-514.
Domkin, D., Laczko, J., Djupsjöbacka, M., Jaric, S., & Latash, M. L. (2005). Joint angle variability in 3D bimanual
pointing: uncontrolled manifold analysis. Experimental Brain Research, 126, 289-306.
Hollerbach, J. M., & Flash, T. (1982). Dynamic interactions between limb segments during planar arm movement.
Biological Cybernetics, 44, 67-77.
Saling, M., Alberts, J. L., Bloedel J. R., & Stelmach, G. E. (1998). Reach-to-grasp movements during obstacle
avoidance. Experimental Brain Research, 118, 251–258.
Scholz, J. P., & Schöner, G. (1999). The uncontrolled manifold concept: identifying control variables for a functional
task. Experimental Brain Research, 126, 289-306.
Todorov, E., & Jordan, M. (2002). Optimal feedback control as a theory of motor coordination. Nature Neuroscience.
5(11), 1226-1235.
Tseng, Y., Scholz, J. P., Schöner, G., & Hotchkiss, L. (2003). Effect of accuracy constraint on the underlying joint
coordination of pointing movements. Experimental Brain Research, 149, 276-288.
CoMedia: Integrating context cues into mobile group media for spectators
Giulio Jacucci, Antti Salovaara, Antti Oulasvirta, Tommi Ilmonen & John Evans
Helsinki Institute for Information Technology, Finland
giulio.jacucci@hiit.fi

Introduction
We explore how sensor-derived cues of context and activity can support the creation and sharing of mobile media in groups of spectators. Building on two years of research on groups using mobile media at large-scale events (Salovaara et al., 2006), we present CoMedia, a smartphone application that integrates awareness cues with media sharing: a communication space is augmented with various sensor-derived information about the group's environment, digital presence, and physical situation. Moreover, an application of visual codes allows discussions to be bound to the environment. The result integrates information about users' digital and physical presence into awareness cues and supports local interaction. This indicates the opportunity for enactive interfaces to explore multimodal and sensorial strategies to mediate cues in more natural and intuitive ways and to affect collective action in the group.

Figure 1. Screenshots of CoMedia’s Media Story structure

CoMedia is based on the concept of Media Stories (or Stories for short). Stories organize
the group’s messages into dedicated discussion spaces that also turn into shared media albums. New
Stories can be created on the fly by entering a title and selecting the desired members that the user
wants to invite to the Story. The leftmost screen in Figure 1A displays the list of Stories that the
user is a member of.

Awareness Cues. Cues about other users’ activity are embedded in the appropriate places in
the user interface. The Media Story list view (Figure 1A) shows the name of the last contributor (e.g., Anne, preceded by a pin symbol), the time passed since the last contribution (e.g., 1 minute or 6 hours) and the number of people currently viewing the Story (1 and an icon of a person). In the message list (Figure 1B), strings like ”18h” tell how much time has passed since the message was posted, the colour of the envelope indicates how recently the message has been viewed by some of the users (red denoting the last 10 minutes), and the number next to an icon of an open book tells how many “viewings” the message has collected from all the users in total. Figure 1D displays information about the times of the different viewings of a message, along with the names of the viewers and lists of the phones that were next to the viewer when the viewing took place.

Augmented Member Lists. Figure 2 shows each user’s current activities within CoMedia, as well as more general contextual data. When a user is selected in the member list, more detailed information about that person is displayed (Figure 2C). Colour-coded person-shaped icons tell whether the others currently have CoMedia running in the phone’s foreground (green), staying in the background (orange), or disconnected (red). The user details view shows the Story to which the member has most recently posted a message. Users’ current locations are shown as reported by a GSM positioning service, along with the durations for which they have stayed in those locations. A small phone icon is either blue, meaning that less than 5 minutes have passed since the last interaction with the phone, or gray, representing a longer interval.

Figure 2. Augmented member lists in CoMedia. A) Global member list that shows the status of
all the users; B) A story-specific member list; C) Detailed view of an individual CoMedia user.
D) Visual codes for local interaction.

Visual Codes. To explore new ways of interacting with the surrounding environment, CoMedia lets users associate 2D visual codes with Stories. When a code has been mapped to
denote a Story, any CoMedia user can include himself or herself in the Story by taking a picture of
the code with CoMedia. This inclusion mechanism bypasses the normal procedure that requires the
creator of the Story to go to a menu and select the user as one of the members. Stickers may be
attached to artifacts and places that have a shared function (such as a hotel room or a cottage when
being together away from home). This can be used to coordinate the use of shared resources. Seeing
a sticker attached to a shared artefact also reminds people of a possibility to communicate through
the system. In addition to this, visual codes can be used as shortcuts to certain commonly visited
Stories.

Towards enactive cues


We believe that current mobile applications fail to support two specific aspects of collectivity:
the participative creation of media and its collective sense making. The former aspect points to how
media can be the product of complex group interactions, whereas current applications assume that
media is the product of individual authors or senders. The latter aspect points to opportunities to
support shared experience of mobile media and foster its creation, for example by enriching it with
awareness cues. These aspects have also been poorly addressed due to the division of work into two separate streams of research: what are commonly called presence (or awareness) applications on the one hand, and mobile media sharing applications on the other.
of integrating these approaches: real-time awareness and interaction across digital and physical
realms through multimedia in a mobile phone. This results in exploiting real-time processing of user
activity, proximity of devices, short range communication, and physical handles to digital media for
the benefit of media-sharing activities.

Inspired by the paradigm of enactive interfaces we are further exploring multimodal and
sensorial strategies:
• to extend the sensing of information beyond physical proximity or digital activity, through physiological sensors and emotion awareness;
• to find novel solutions for mediating cues that go beyond portraying symbols on the mobile terminal’s screen, through mixed reality interfaces that better combine perception and action.

References
Salovaara, A., Jacucci, G., Oulasvirta, A., Saari, T., Kanerva, P., Kurvinen, E., & Tiitta, S. (2006). Collective creation
and sense-making of mobile media. In Proceedings of the SIGCHI conference on human factors in computing
systems (CHI'06), pp. 1211-1220. Montreal, Canada, April 21-27. New York, NY: ACM Press.
Motor images dynamics: an exploratory study
Julie Jean1, Lorène Delcor2 & Michel Nicolas3
1 IRePS, France
2 LAMECO (EA3021), University of Montpellier III, France
3 ISOS (EA 3985), University of Bourgogne, France
lorene.delcor@univ-montp1.fr

The aim of this study was to analyse the dynamics of motor images. Mental rehearsal consists
in repeating our own movements without performing them (Cadopi & D’Arripe-Longueville, 1998).
This practice is known to enhance motor performance (Beauchamp, Bray & Albinson, 2002) and
has been widely studied among this past 30 years (Farahat, Ille & Thon, 2004).
Researches leaded in the framework of represented action as conceived by Jeannerod (1999)
and the recent development of time series analysis allowed us to re-question this field of research.
Until now, researches about mental practice were related to its effect on performance, whereas
questions about the learning of motor images and their evolution were rarely approached. However,
studies related to stability and instability properties of images in memory show that images contents
can be extremely variable throughout time (Giraudo & Pailhous, 1999). These authors studied the
temporal evolution of images contents and they had shown images building and structure processes
in memory. These time-dependant properties would be shared between all images.
This kind of research has been made possible by recent developments in the analysis of time series arising from cognition or motor control. Spray and Newell (1986) analysed discrete time series in motor learning and showed that such analysis produces models that inform on the on-going processes underlying the observed behaviours. These authors analysed time series with ARIMA modelling (Box & Jenkins, 1976). The analysis consists in identifying the structure of the time series, in particular the inter-trial relation that best describes the evolution of behaviour over time, the hypothesis being that understanding the structure of time series through ARIMA models informs on the processes that produced the behaviours.
When mental rehearsal is used with athletes, the problem of the instability of motor images is essential, because the efficiency of this practice depends on the accuracy of image content (Denis, 1989). Since this content changes across rehearsals, we wished to study the time-dependent properties of motor images in the framework of action research.
Through this exploratory research, we studied the temporal evolution of motor images, and more precisely imagery duration, as revealing the image building process. Motor imagery duration is an objective, quantitative measure of the processes underlying motor representation (Decety, 1995).

Method
Subjects: Five high-level decathlon athletes involved in mental rehearsal training.
Procedure: A preliminary phase consisted of learning mental rehearsal. In the experimental phase, each subject chose four movement sequences from the ten decathlon events, and was then asked to rehearse these sequences mentally during five sessions of ten trials each.
Dependent variables: Each motor image duration was timed with a stopwatch in order to build individual time series.
These time series were analyzed using ARIMA procedures, which produce iterative equations to model the process underlying time series generation (Box & Jenkins, 1976).

Results
Time series analysis revealed two qualitatively distinct ARIMA models.
The first is an autoregressive model, ARIMA(1,0,0), obtained for 4 time series, which obeys the following equation:
y_t = α + φ_1 y_{t-1} + ε_t,
where α is a constant, φ_1 is the auto-regressive coefficient, and ε_t is the error associated with the current value. (The auto-regressive coefficients of these series were 0.47, 0.56, 0.39 and 0.62.)
The other is a white noise model, ARIMA(0,0,0), obtained for 5 time series and characterised by the following equation:
y_t = μ + ε_t,
where μ represents the mean of the time series, and ε_t is the error associated with the current value.
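
To make the two model structures concrete, the following minimal sketch (not the authors' analysis; the duration series is simulated and the Python statsmodels package is assumed) fits both an ARIMA(1,0,0) and an ARIMA(0,0,0) model to a series of imagery durations and compares their fit:

    # Minimal sketch: fitting ARIMA(1,0,0) vs ARIMA(0,0,0) to a series of
    # imagery durations (assumes numpy and statsmodels; data are simulated).
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    durations = np.empty(50)          # e.g. 5 sessions x 10 trials
    durations[0] = 12.0               # simulated durations (s) around 12 s
    for t in range(1, 50):
        durations[t] = 12.0 + 0.5 * (durations[t - 1] - 12.0) + rng.normal(0, 0.5)

    ar1 = ARIMA(durations, order=(1, 0, 0)).fit()   # y_t = alpha + phi_1*y_{t-1} + e_t
    wn = ARIMA(durations, order=(0, 0, 0)).fit()    # y_t = mu + e_t (white noise)
    print("AR(1) parameters:", ar1.params)
    print("AIC, AR(1) vs white noise:", ar1.aic, wn.aic)

A lower information criterion for the autoregressive model would point toward a deterministic inter-trial relation of the kind discussed below, whereas no improvement over the white noise model would suggest trial-to-trial independence.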

Discussion
The theoretical autoregressive ARIMA model produces oscillations of the measure around a value (the constant, or zero) which can be viewed as an attractor, a "space" toward which everything converges (Stewart, 1998). In the frame of our study, we observed a single relaxation of imagery duration toward a stable value.
Moreover, this model implies a deterministic inter-trial relation linking successive trials. This relation suggests that the image at one point in time is affected by previous images, and that present and past images determine, to a certain degree, the future image (Spray & Newell, 1986).
Finally, this model has been found in the modelling of the memorisation of movement sequences and tends to characterise learning processes (Delcor, Cadopi, Delignières & Mesure, 2003).
A white noise model, by contrast, suggests that behaviours are not linked in time: the images no longer evolve as a function of time. This model does not describe any on-going process during mental rehearsal.
These two distinct models could be explained by different effects of mental rehearsal on performance.
These results can help us understand the processes underlying the construction of motor representations, and also enhance our field practice in sport performance optimization. However, further research is needed to extend these results.

References
Beauchamp, M. R., Bray, S. R., & Albinson, J. G. (2002). Pre-competition imagery, self-efficacy and performance in
collegiate golfers. Journal of Sports Sciences, 20(9), 697-705.
Box, G. E. P., & Jenkins, G. (1976). Time Series Analysis: Forecasting and Control, San Francisco, Holden-Day.
Cadopi, M., & D’Arripe-Longueville, F. (1998). Imagerie mentale et performance sportive. In P. Fleurance (Ed.),
Entraînement mental et sport de haute performance (pp.165-194).
Decety, J. (1995). The neurophysiological basis of motor imagery. Behavior Brain Research, 77, 45-52.
Delcor L., Cadopi, M., Delignières, D., & Mesure, S. (2003). Dynamics of the memorization of a morphokinetic
movement sequence in human. Neuroscience Letters, 336, 25-28.
Denis, M., (1989). Image et cognition. Paris, PUF.
Farahat, E., Ille, A., & Thon, B. (2004). Effect of visual and kinesthesic imagery on the learning of a patterned
movement. Journal of Sport Psychology, 35, 119-132.
Giraudo, M. D., & Pailhous, J. (1999). Dynamic instability of visuo-spatial images. Journal of Experimental
Psychology: Human Perception and Performance, 25(6), 1495-1516.
Jeannerod, M. (1999). To act or not to act. Perspectives on the representation of action. Quarterly Journal of
Experimental Psychology, 52, 1-29.
Spray, J. A., & Newell, K. M. (1986). Time series analysis of motor learning: KR versus non-KR. Journal of Human
Movement Studies, 5, 59–74.
Stewart, I. (1998). Dieu joue-t-il aux dés? Les mathématiques du chaos. Manchecourt, Flammarion.
3rd International Conference on Enactive Interfaces (Enactive /06) 165
Exploring the boundaries between perception and action in
an interactive system

Manuela Jungmann1, Rudi Lutz1, Nicolas Villar2, Phil Husbands1 & Geraldine Fitzpatrick1
1 Department of Informatics and Artificial Intelligence, University of Sussex, UK
2 Infolab21, Computing Department, Lancaster University, UK
mj43@sussex.ac.uk

The relationship which interactive art establishes between author, work, and audience is
unprecedented in the context of the hybridisation of man and his artefacts. As these environments
draw closer to the autonomy of organisms, combining scientific and artistic concepts, they begin to
build an intimate engagement with the audience. We are proposing an interactive installation that implements a concept derived from the theory of autopoiesis. In the setting of this interactive environment, we are conducting user studies to explore links between whole-body movements, perception, and social interaction.

Introduction
The digital revolution has produced a new genre in art. Interactive art embraces the
relationships of the body and environment through technology. Body, environment and technology
can be considered meta-components through which interactive relationships are formed. Digital
technology is now firmly embedded in society, and has created conditions that facilitate a cohesive matrix between human and human, and between human and environment. This development is reflected in the engagement of meta-components in interactive art, which has, as a result, broadened in definition and complexity. The overall trend in this genre of art has taken on a communication focus that is increasingly body-centric, evolving along the cognitive paradigm of a constructed reality (Stewart, 1996). Early engagement with interactive art entailed simple push-button movements that limited the body to a machine-like input device navigating a screen display. From there the logical evolution proceeded with gestural input1 providing "mirror" interactivity (Lavaud, 2004), the dynamic of imitation and immediate feedback loops between human and image. More sophisticated interactive artworks are now beginning to evolve, which reflect human body-consciousness through active, immersive participation using complex, full-body motion2. This body-centric perspective equally translates to the ubiquitously deployed technology. Initial simple database designs stand next to intelligent, emergent, and adaptive systems which mimic the autonomy of human beings. According to the artists Lavaud, Bret, and Couchot, these systems are paths of investigation: they liberate the artist from being the author in the conventional sense, since the outcome is ongoing and
open-ended. Couchot has located the hybridisation of these works, the crossing between
heterogeneous, technical, semiotic, and aesthetic elements, in the space between the work and the
interactor, connecting human and machine through intimacy. Interactive environments that
synthesize theoretical knowledge from cognitive science and artificial intelligence establish a
relationship between art and science that is practical and operational (Couchot, 2005). They can
therefore offer context-related insight when studying new theoretical grounds.

Poster summary
The research featured in this poster is a participatory, interactive installation exploring social
coordination and perception. We are modelling the interaction on Maturana and Varela's (1980) definition of the communication of autopoietic organisms, with the aim of investigating the embodied mind within the boundaries of a social network. Autopoiesis looks at communication as circular organization, in contrast to the traditional information-exchange metaphor analogous to a computer.

1 For example: Monika Fleischmann and Wolfgang Strauss, "Liquid Views".
2 Examples of such works are: "passus: A Choreographic System for Kinaesthetic Responsivity" by Susan Kozel and Gretchen Schiller, http://www.meshperformance.org/, and "Intimate Transactions" by Keith Armstrong, Lisa O'Neill, and Guy Webster, http://www.intimatetransactions.com/
Our system relies on a tight coupling between whole body movements, perception, and social
interaction. Through a spatial arrangement in the environment, participants stand in a circular,
dedicated area facing inwards while interacting with a visual projection. The interaction builds on
the participants’ reciprocal coordination of whole-body, swaying movements that contribute as
input to create a visual stimulus. The visual stimulus is an image of a waveform generated by the
swaying movements of the participants which can move from a turbulent to a stable state depending
on the collective, coordinated movements. A continuous visual feedback to the participants’
swaying movements provides a closed loop. The image is projected top-down into the centre of the
ring of participants. A network is formed between the participants, following a circular causality
between the individual participants and the group. For example, a participant might initiate a
coupling, such as eye contact, which encourages a swaying action in the other interactor who in turn
alters the state of the stimulus and hence affects what is perceived by the entire group. The state of
the stimulus is being generated and re-generated by networked, social interaction between the
members of the group.
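
The abstract does not specify the mapping algorithm; purely as an illustration of the circular coupling described above, the hypothetical Python sketch below derives a collective synchrony index from the participants' sway phases (a Kuramoto-style order parameter) and uses it to move the projected waveform between turbulent and stable states:

    # Hypothetical sketch of the sway-to-stimulus coupling described above.
    # The installation's actual algorithm is not given here; this only
    # illustrates one way collective coordination could drive the image.
    import numpy as np

    def synchrony(phases):
        """Kuramoto-style order parameter: 0 = uncoordinated sway, 1 = in phase."""
        return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

    def waveform_frame(phases, x, rng):
        """One frame of the projected waveform: turbulent when the group is
        uncoordinated, settling to a smooth wave as coordination increases."""
        r = synchrony(phases)
        noise = (1.0 - r) * rng.normal(0.0, 0.5, x.size)
        return np.sin(2.0 * np.pi * x) + noise

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 2.0, 400)
    frame = waveform_frame([0.1, 0.2, -0.1, 3.0, 1.5], x, rng)  # five sway phases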
Through a series of user studies we probe this interactive environment for answers relating to
the participants’ coordination and communication with the system. Will participants synchronize
their movements to achieve a stable state in the visual stimulus? Under what circumstances will
they synchronize and coordinate their movements? How will the participants learn to communicate?
Will they know and be able to communicate in any detail how they achieve changes in the display?
If the participants stabilize the state of the visual stimulus what happens if that state is disturbed by
the system?

We aim to find a match between two views (artistic and formal) on several central questions:

- Under what circumstances will participants learn to synchronize the formation of a stable visual stimulus?
- What are the important variables in the time evolution of the system?
- Under what conditions will the behaviour of the system, including the human participants, show interesting regularities over time (attractors) in terms of interpersonal communication?

We have currently deployed a prototype that accommodates two participants. While gradually building up to five participants, the initial smaller-scale set-up will provide insights through user studies determining the characteristics of natural swaying movements and the algorithmic adjustments needed to accommodate the visual display. Furthermore, we will study what constitutes a convincing correspondence between the participants' movements and the visual depiction of the moving waveform. This poster will report on the results obtained through video footage, logging and charting of swaying movements, and interviews with the participants.

References
Bret, M. (2006). http://www.demiaux.com/a&t/bret.htm.
Couchot, E. (2005). Media Art: Hybridization and Autonomy. In First International Conference on the Media Arts,
Sciences and Technologies Banff, Canada: Banff New Media Institute.
Lavaud, S. (2002-2004). Les image/systèmes: des alter egos autonomes. Université Paris8. http://hypermedia.univ-
paris8.fr/seminaires/semaction/seminaires/txt01-02/journees0602/sophie.htm.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. Reidel, Dordrecht.
Stewart, J. (1996). Cognition = life: Implications for higher-level cognition. Behavioural Processes, 35, 311-326,
Elsevier.
3rd International Conference on Enactive Interfaces (Enactive /06) 167
Interaction between saccadic and smooth pursuit eye movements: first
behavioural evidences

Stefano Lazzari1, Benoit Bardy1, Helena Grillon2 & Daniel Thalmann2


1 Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
2 VRLab, Ecole Polytechnique Fédérale de Lausanne, Switzerland
benoit.bardy@univ-montp1.fr

Introduction
The study, conception and realisation of human-computer interfaces cannot ignore a careful
analysis of eye motion, both from a behavioural and from a functional point of view. The goal of
this study was to investigate the dynamic behaviour of the oculomotor system during the transition
between slow and fast movements. In other words, it was to analyse the transition from purely
smooth pursuit (SP) ocular movements to eye movements with a prevailing presence of saccades
(SA) and vice-versa. In the literature these two subsystems (SP and SA) are often considered as
independent systems sharing a common physical input (the retina) and output (the eye plant). As a
consequence, they have often been studied and modelled separately (Lazzari et al., 1997; Robinson, 1973; Young & Stark, 1963).
More recent studies of the cerebral pathways involved during SP movements and of the cortical areas activated during both SP and SA movements (Krauzlis et al., 1997; Krauzlis, 2004) suggest that the SP and SA systems have very similar functional architectures. These neurophysiological findings lead us to consider the two output movements as different outcomes of common sensory–motor functions.
The experiment reported below aimed at testing these findings from a behavioural point of view, by evidencing the interaction between the SP and SA systems. Our hypothesis was that at least
one of the 3 classical parameters characterising saccadic movements (i.e. amplitude, duration and
peak velocity) is influenced by the direction of the transition: slow-fast versus fast-slow.

Methods
Participants
20 subjects (17 males and 3 females, aged between 21 and 45) volunteered to participate in
this study after giving their written informed consent. Their visual acuity was normal, after
correction for 10 of them.
Apparatus
Participants sat in front of a 17'' flat screen, resting their chin on a fixed support in order to facilitate the maintenance of their posture. The distance between their eyes and the centre of the screen was 56.5 cm, which imposed a unitary degree-centimetre conversion (1° = 1 cm) between the angle travelled by the eyes and the distance measured on the screen. The target, a green dot of 3 mm diameter on a uniform black background, was displayed on the screen. The target position was controlled by a computer using the Psychophysics Toolbox extensions for Matlab (Brainard, 1997; Pelli, 1997). Horizontal eye movements were recorded by means of an infra-red corneal reflection device (VisionTrack, ISCAN Inc., Burlington, MA) at a frequency of 60 Hz. Before the start of each session, a calibration was performed using a rectangular array of 5 targets that covered the zone of the oculomotor field explored during the subsequent recording sessions.
Experimental design
The target oscillated on the horizontal plane following a pure sinusoidal movement with an
amplitude of 23 cm. Participants were asked to follow the target with their eyes as precisely as they
could, without moving the head. The task was performed in two different conditions: (i) the target
started oscillating at the frequency of 0.1 Hz and increased its frequency until 1 Hz (the acceleration
condition or ACC) and (ii) the target started moving at 1 Hz and decelerated until 0.1 Hz (the
deceleration condition or DEC). The acceleration/deceleration rate was 0.02 Hz every 3 oscillating
cycles. Each condition was executed twice, in a pseudo-random order.
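
For illustration only (the authors generated the stimulus in Matlab with the Psychophysics Toolbox; the Python sketch below is an assumption about the implementation, not their code), the ACC target trajectory can be produced as a phase-continuous sinusoid whose frequency is raised by 0.02 Hz every 3 cycles:

    # Illustrative sketch of the ACC condition: a sinusoidal target whose
    # frequency rises from 0.1 Hz to 1 Hz in 0.02 Hz steps every 3 cycles,
    # with the phase integrated so that frequency changes produce no jumps.
    import numpy as np

    def acc_target(amplitude_cm=23.0, f_start=0.1, f_end=1.0,
                   f_step=0.02, cycles_per_step=3, sample_rate=60.0):
        positions, phase, f = [], 0.0, f_start
        dt = 1.0 / sample_rate
        while f <= f_end + 1e-9:
            n_samples = int(round(cycles_per_step / f * sample_rate))
            for _ in range(n_samples):
                positions.append(amplitude_cm * np.sin(phase))
                phase += 2.0 * np.pi * f * dt    # integrate phase
            f += f_step
        return np.array(positions)

    trajectory = acc_target()   # horizontal target position (cm), 60 samples/s

The DEC condition would simply traverse the same frequency steps in reverse order.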

Results and Discussion


The macroscopic parameter we used to compare eye movements in the two experimental conditions was the number of saccades performed. No significant difference was found between the ACC and DEC conditions (p > 0.05). This result indicates that the participants did not change their general way of performing the task. The number of saccades produced was not influenced by the direction of the transition between slow and fast movements, excluding the presence of a high-level hysteresis when moving from purely SP to SA movements and vice versa.
Going further in our analysis, we addressed the inner characteristics of each saccade. The
main parameters characterising a saccade are its duration, its amplitude and its peak velocity.
Comparing the two experimental conditions, we evidenced a significant difference in saccade
duration (F1,19 = 9.98; p < 0.01) and in saccade amplitude (F1,19 = 6.27; P < 0.05), while no difference
was found for peak velocity (p > 0.05). Saccades’ mean duration was 109 ms ± 1.7 in ACC
condition and 104.9 ms ± 1.5 in DEC condition. Their mean amplitude was 9.29° ± 0.24 in ACC
and 8.81° ± 0.26 in DEC. Several studies have suggested that there are relatively fixed relationships
between the amplitude, duration and peak velocity of saccades (Fuchs, 1971; Abrams et al, 1989).
These results were confirmed by our experiment. Moreover the differences pointed out between
ACC and DEC conditions suggest a possible interaction between the two oculomotor subsystems.

Conclusion
The results we obtained confirm the influence of SP-SA transitions on eye movement
parameters. These preliminary results did not allow us to fully understand the nature of this
influence and all its implications. Nevertheless, they give a glimpse of the possible interaction
between the two oculomotor subsystems at a behavioural level. Further analyses should allow us to
better describe the nature of this interaction.

References
Abrams, R. A., Meyer, D. E., & Kornblum, S. (1989). Speed and accuracy of saccadic eye movements: characteristics
of impulse variability in the oculomotor system. Journal of Experimental Psychology: Human Perception and
Performance, 15, 529-43.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433-436.
Fuchs, A. F. (1971). The saccadic system. In P. Bach-y-Rita, C. C. Collins, & J. E. Hyde(Eds.), The Control of Eye
Movements (pp.343-362). New York: Academic Press.
Krauzlis, R. J. (2004). Recasting the Smooth Pursuit Eye Movement System. Journal of Neurophysiology, 91(2), 591-
603.
Krauzlis, R. J., Basso, M. A., & Wurtz, R. H. (1997). Shared motor error for multiple eye movements. Science,
276(5319), 1693-1695.
Lazzari, S., Vercher, J. L., & Buizza, A. (1997). Manuo-ocular coordination in target tracking. I. A model simulating
human performance. Biological Cybernetics, 77(4), 257-266.
Pelli, D. G. (1997) The Video Toolbox software for visual psychophysics: Transforming numbers into movies. Spatial
Vision, 10, 437-442.
Robinson, D. A. (1973). Models of the saccadic eye movement control system. Biological Cybernetics, 14(2), 71-83.
Young, L. R., & Stark, L. (1963). Variable feedback experiments testing a sampled data model for eye tracking
movements. IEEE Transactions Human Factors in Electronics, 1, 38–51.
3rd International Conference on Enactive Interfaces (Enactive /06) 169
The effect of discontinuity on timing processes

Loïc Lemoine, Kjerstin Torre & Didier Delignières


Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
loic.lemoine@univ-montp1.fr

Introduction
Timing has been viewed for a long time as a single ability shared between all rhythmic tasks.
However, Robertson et al. (1999), comparing variability in continuous and discontinuous tasks,
showed the exploitation of different timing processes according to task conditions. Schöner (2002)
pleaded for the distinction between event-based timers, which are supposed to work on the basis of
cognitive events produced by central internal clocks, and dynamical timers, which are viewed as
peripheral and using the properties of limb motion to define time intervals.
Zelaznik et al. (2002) suggested that the use of event-based or dynamical timing control could
be related to the continuous vs. discontinuous nature of the task. Nevertheless, some studies have
shown that in some cases participants do not use the expected timer (Delignières et al., 2004; Lemoine et al., in preparation). The aim of this study was to test the effect of motion discontinuity on the exploitation of timers.
Delignières et al. (2004) showed that this distinction could be revealed through the
examination of bi-logarithmic power spectra (Figure 1). They collected interval time series in
tapping and forearm oscillations, two tasks supposed to elicit the exploitation of an event-based
timer for the former, and a dynamical timer for the latter. They showed that event-based timers were
characterized by a positive slope at high frequencies, and dynamical timers by a slight flattening.

Figure 1. Power spectra in log-log coordinates from tapping series (left panel) and oscillation
series (right panel).

We didn’t know, nevertheless, if this method gave reliable results on the distinction between
timers. We recently proposed a new method, based on the analysis of windowed autocorrelation
functions, which allows an efficient discrimination between event-based and dynamical timers
(Lemoine et al. 2006).

Method
Ten participants practiced two discontinuous tasks (tapping and discrete wrist oscillations)
and two continuous tasks (circle drawing and forearm oscillations). Each task was performed
following two frequency conditions (2 and 1.25Hz), twice per frequency condition in two separate
sessions. We used the mean lag one value of the Detrended Windowed Autocorrelation Function
(DWAF), denoted γ (1) , to determine the nature of the exploited timer (Lemoine et al., 2006). γ (1) is
supposed to be negative for event-based timers, and positive for dynamical timers.
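
The DWAF itself is only summarized here; full details are given in Lemoine et al. (2006). As a rough sketch of the idea, assuming that the series is linearly detrended within successive windows and that the lag-one autocorrelations are then averaged across windows, γ(1) could be computed as follows (the window length is arbitrary):

    # Rough sketch of a detrended windowed lag-one autocorrelation; the exact
    # procedure of Lemoine et al. (2006) may differ from this simplification.
    import numpy as np

    def gamma1(intervals, window=15):
        """Mean lag-1 autocorrelation of linearly detrended windows."""
        intervals = np.asarray(intervals, dtype=float)
        values = []
        for start in range(0, len(intervals) - window + 1, window):
            w = intervals[start:start + window]
            t = np.arange(window)
            w = w - np.polyval(np.polyfit(t, w, 1), t)   # remove linear trend
            values.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        return float(np.mean(values))

    # gamma1(...) < 0 would point to an event-based timer, > 0 to a dynamical one.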
Results
As reported in Figure 2, we obtained on average negative γ(1) values in the discontinuous tasks and positive values in the continuous tasks. A repeated-measures ANOVA 2 (Session) x 4 (Task) x 2 (Frequency) revealed significant effects of Task and Frequency. Moreover, γ(1) was significantly correlated between sessions (r = 0.91). Nevertheless, as indicated in Figure 2 (right panel), in some cases participants did not use the expected timing process.

[Figure 2: bar charts. Left panel y-axis: mean γ(1) indices per task (tapping, oscillation, circle drawing, discrete wrist oscillation); right panel y-axis: mean number of misclassifications per frequency condition; frequency conditions: 2 Hz and 1.25 Hz.]

Figure 2. Left panel: Mean γ(1) indices for each task. Right panel: Mean number of occurrences of the non-predicted timer per frequency condition for each session.

Discussion
Results showed a clear effect of discontinuity on the preferentially used timing process. The two discontinuous tasks (tapping and intermittent drawing) are characterized by negative γ indices, while the two continuous tasks (oscillation and circle drawing) are characterized by positive γ indices. The effect of discontinuity on the preferential use of event-based timers can therefore be considered reliable. This effect is stable over sessions, with reproducible γ indices from one session to the next (no session effect, and a significant correlation between sessions). In some cases, however, participants did not exploit the predicted timing process; task difficulty and practice could explain the observed differences between frequency conditions.

References
Delignières, D., Lemoine, L., & Torre, K. (2004). Time intervals production in tapping and oscillatory motion. Human
Movement Science, 23(2), 87-103.
Lemoine, L., Torre, K., & Delignières, D. (2006). Detrended-windowed autocorrelation function: A new method for
detecting timer’s exploitation. 9th European Workshop on Ecological Psychology, pp. 63. 5-8 July, Groningen,
Netherlands.
Lemoine, L., Torre, K., & Delignières, D. (in preparation). Fractal and spectral indices in rhythmic tasks: could they be
used for timers’ exploitation distinction?
Robertson, S. D., Zelaznik, H. N., Lantero, D. A., Bojczyk, K. G., Spencer, R. M., Doffin, J. G., & Schneidt, T. (1999).
Correlations for timing consistency among tapping and drawing tasks: evidence against a single timing process
for motor control. Journal of Experimental Psychology: Human Perception and Performance, 25(5), 1316-1330.
Schöner, G. (2002). Timing, clocks, and dynamical systems. Brain and Cognition, 48, 31-51.
Wing, A. M., & Kristofferson, A. B. (1973). The timing of interresponse intervals. Perception and Psychophysics,
13(3), 455-460.
Zelaznik, H. N., Spencer, R. M. C., & Ivry, R. B. (2002). Dissociation of explicit and implicit timing in repetitive
tapping and drawing movements. Journal of Experimental Psychology: Human Perception and Performance,
28(3), 575-588.
3rd International Conference on Enactive Interfaces (Enactive /06) 171
The effects of aging on road-crossing behavior:
The interest of an interactive road crossing simulation

Régis Lobjois1, Viola Cavallo1 & Fabrice Vienne2


1 Laboratoire de Psychologie de la Conduite, INRETS, France
2 Modélisations, SImulations et Simulateurs de conduite, INRETS, France
regis.lobjois@inrets.fr

Road crossing requires the pedestrian to visually judge whether there is sufficient time to
cross before a vehicle arrives, and possibly to adapt his walking speed to traffic characteristics
while crossing. To this end, the pedestrian has to relate the estimated time-to-arrival of the vehicle
to the estimated crossing time depending on his walking speed. Crossing a road may then be highly
difficult for older adults, as suggested by age-related changes in motion perception (Snowden & Kavanagh, 2006) and in collision detection (DeLucia et al., 2003), or as attested by accident statistics (ONISR, 2005). Various methods have been used to study gap acceptance in road-
crossing situations. Whereas naturalistic observation is the most realistic setting for data collection,
accidents cannot be ruled out and there is no experimental control of the crossing situation (e.g.
time gaps, vehicle speed). Recently, Oxley et al. (2005) used a judgement task to investigate the
effects of age on road-crossing decisions. Computer animations of road crossing situations were
projected on a large screen, and the participants had to decide (by pressing a button) whether or not
to cross in relation to the traffic situations. Results revealed that the percentage of unsafe decisions
(safety margin < 0) increased dramatically for older-old adults (70 %) in comparison with younger
(18 %) and younger-old adults (19 %). However, te Velde et al. (2005) compared participants' crossing decisions in a judgement task and a road-crossing task (actually walking across the road if possible) and showed that the percentage of unsafe decisions was higher in the judgement task. The validity of the judgement task is therefore questionable, and the absence of calibration between perception and action has been pointed out as its main limitation. On this basis, this study aimed at
comparing crossing decisions between younger and older adults in both a judgement and an
interactive crossing task.

Method
Seventy-eight participants aged 20 to 30, 60 to 70 and 70 to 80 took part in the study. Women and men were equally distributed within each age group (n = 26). The experimental device included a portion of an experimental road (4.2 m), an image generation and projection system (3 large screens), and a response recording system. The images were computed and projected according to the participants' eye-height. In the crossing task, the visual scene depended on the pedestrian's displacement: the interactive updating of the visual scene according to the pedestrian's position was achieved by a movement tracking system which recorded the pedestrian's motion (30 Hz). In the judgement task, a two-response button panel recorded the participant's yes or no responses. The participants performed the judgement and crossing tasks in a counterbalanced order. In both tasks, participants waited at the kerb and had to decide whether they could cross between the two approaching cars (and, if possible, to walk across the street in the crossing task). Vehicle speed (40, 50 and 60 km/h) and the time gap between the cars (from 1 to 8 s, in steps of 1 s) were manipulated. Each of the 24 speed-time gap combinations was presented 5 times, amounting to 120 trials per task. All trials were scored according to whether or not the participants decided to cross the street, and several indicators were analyzed to compare decisions in the two tasks.

Results and Discussion


Statistical analyses on the mean accepted time gap (i.e., the time gap at which behavior changed from not crossing to crossing) revealed, as expected, a main effect of age: the mean time gap increased with age (M = 3.3, 3.7 and 4.2 s for each age group). However, neither the task factor nor the interactions involving this factor were significant. Although the mean time gaps were similar in the judgement and the crossing tasks, the structure of the decisions was different.

Crossing decisions were categorized into unsafe decisions (accepted crossings where safety
margin was less than 0 s) and missed opportunities (rejected crossings where individual mean
crossing time was shorter than the sum of the time gap of the approaching car and individual mean
initiation time). Results (Table 1) showed that the percentage of unsafe decisions did not differ as a
function of age but that participants had more unsafe decisions in the judgement compared to the
crossing task. Regarding missed opportunities (Table 1), results revealed that younger adults missed
less opportunities in the crossing task, whereas no difference appeared between both tasks for older
adults. As time-to-arrival is known to be the functionnal temporal variable to guide action, we at last
compared the correlation coefficients (R² transformed to Z-score) between accepted crossings and
time gaps of each group in each task. The results (Figure 1) revealed a better correlation between
decisions and time gaps in the crossing compared to the judgement task only in the younger group.
Table 1. Percentage of decisions as a function of task, age and decision types.

Task         Age      Unsafe decisions   Missed opportunities
Crossing     20-30    1.1                13
Crossing     60-70    2.3                14.9
Crossing     70-80    1.4                21.8
Judgement    20-30    6.3                17.8
Judgement    60-70    5.7                16.4
Judgement    70-80    4.2                23
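
The decision categories above can be expressed as a small calculation. The sketch below is illustrative only: the variable names are ours, and the exact safety-margin convention is inferred from the description rather than taken from the paper.

    # Sketch of the decision categories described above (variable names and the
    # precise safety-margin convention are inferred, not taken from the study).
    def classify_trial(accepted, time_gap, initiation_time, crossing_time):
        """Return 'unsafe', 'missed opportunity' or 'other' for one trial.
        accepted        : True if the participant crossed (or decided to cross)
        time_gap        : temporal gap between the two approaching cars (s)
        initiation_time : participant's mean crossing initiation time (s)
        crossing_time   : participant's mean time to walk across the street (s)
        """
        safety_margin = time_gap - (initiation_time + crossing_time)
        if accepted and safety_margin < 0.0:
            return "unsafe"                      # accepted, but the gap was too short
        if not accepted and crossing_time < time_gap + initiation_time:
            return "missed opportunity"          # rejected, but crossing was feasible
        return "other"

    print(classify_trial(accepted=True, time_gap=3.0,
                         initiation_time=0.5, crossing_time=3.0))   # -> unsafe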

In sum, the results of the younger group suggest that the estimation task does not reliably represent road-crossing behavior and that the action component is essential to assess road crossing in a valid manner. The findings furthermore suggest that age affects the perception of action possibilities in the actual-crossing situation, and that older adults may be less likely to use feedback from the crossing action to calibrate the perception of the gap in terms of the «crossability of the road».

Figure 1. R² as a function of age and task

References
DeLucia, P. R., Bleckley, M. K., Meyer, L. E., & Bush, J. M. (2003). Judgments about collision in younger and older
drivers. Transportation Research Part F, 6, 63-80.
Observatoire National Interministériel de Sécurité Routière (2005). Les grandes données de l’accidentologie:
Caractéristiques et causes des accidents de la route. La documentation Française, Paris.
Oxley, J., Fildes, B., Ihsen, E., Charlton, J., & Day, R. (2005). Crossing roads safely: An experimental study of age
differences in gap selection by pedestrians. Accident Analysis and Prevention, 37, 962-971.
Snowden, R. J., & Kavanagh, E. (2006). Motion perception in the ageing visual system: Minimum motion, motion
coherence, and speed discrimination thresholds. Perception, 35, 9-24.
te Velde, A. F., Van Der Kamp, J., Barela, J. A., & Savelsbergh, G. J. P. (2005). Visual timing and adaptative behavior
in a road-crossing simulation study. Accident Analysis and Prevention, 37, 399-406.
3rd International Conference on Enactive Interfaces (Enactive /06) 173
Simulating a virtual target in depth through the invariant
relation between optics, acoustics and inertia

Bruno Mantel1, Luca Mion2, Benoît Bardy1, Federico Avanzini2 & Thomas Stoffregen3
1 Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
2 Department of Information Engineering, University of Padova, Italy
3 Human Factor Research Laboratory, University of Minnesota, USA
bruno.mantel@univ-montp1.fr

Behavior causes simultaneous changes in multiple forms of sensory stimulation, such as light,
sound, and pressure. Research on multimodal perception has tended to focus on the integration (in
the nervous system) of stimuli available to different sensory systems. However, potential sensory
stimulation is not limited to patterns available to individual sensory systems. There are also patterns
that extend across different forms of ambient energy. These superordinate, higher-order patterns
make up the global array (Stoffregen & Bardy, 2001). In principle, patterns in the global array
might be detected without sensitivity to lower-order patterns in individual forms of ambient energy.
If perceivers can detect information in the global array, then “sensory integration” would be
unnecessary. There has been little empirical research evaluating the hypothesis that perceivers
detect patterns in the global array.
Our aim is to evaluate perceptual sensitivity to the global array in the context of egocentric distance perception. This involves (i) formalizing and manipulating how distance is specified across optics, acoustics and inertia, and (ii) testing whether this information is perceived and used by humans.

Optical, acoustical and inertial dimensions


In monocular viewing, the perceiver has no access to interocular information, such as stereopsis or convergence. If virtual objects are presented through a head-mounted display (HMD),
distance information will not exist in accommodation. If the object is not familiar, there is no
previous knowledge of its size. If the object is shown against a black background, there is no
distance information provided by the line of the horizon. Nor is there aerial perspective if the 3D
display model does not include it. Even motion parallax (or more exactly the relative apparent
motion of the static target generated by the perceiver’s movements) does not provide, on its own,
any information about egocentric distance, as it yields only angular position and its derivatives. On
the other hand, there is an invariant relation across optical and inertial energies that specifies
egocentric distance. In short, the optical apparent angular movements of the target are scaled in
terms of the inertial consequences of head movement.
Previous studies with such a design have revealed that perceivers perceive egocentric distance
accurately only when the intermodal relation between optics and inertia is preserved (Mantel, Bardy
& Stoffregen, 2005). Moreover, changing the gain in the intermodal relation shifts the perceived
distance accordingly. Mantel et al. recorded the optics generated by subjects’ movements, and used
these recordings as open-loop stimuli, such that there was no relation between head movements and
optical changes. In this condition, subjects were unable to perform the task, confirming that parallax
is not sufficient on its own.

In everyday life, our interactions with the environment alter stimulation of the auditory
system. For example, head movement influences patterns of binaural stimulation that are related to
the spatial location of sound sources (Wallach, 1940). The task of evaluating the sound direction is
accomplished by integrating cues for the perception of azimuth (i.e., angular direction in the
horizontal plane) with the spectral changes that occur with head movements creating the perception
of elevation (i.e., angular direction in the vertical plane). These auditory cues are produced by
physical effects of the diffraction of sound waves by the human torso, by the shoulders, the head
and outer ears (pinnae), which modify the spectrum of the sound that reaches the ear drums
(Blauert, 1997). In particular, in the case of sources within 1 m of the head, distance perception is
affected by additional range-dependent effects and binaural distance cues arise that are not present
for more distant sources. Also, an auditory motion parallax effect results from range-dependent
azimuth changes, so that for close sources a small shift causes a large azimuth change, while for
distant sources there is almost no azimuth change. All these cues can be captured by the head-
related transfer function (HRTF) that describes how a given sound wave is filtered by the physical
elements of the human body before the sound reaches the eardrum. Finally, effects of a reverberant
environment can also play a relevant role: early reflections and the change in proportion of reflected
to direct energy are important cues especially for distance estimation.

Virtual set-up
To simulate a static virtual target at a particular distance in front of the observer (that is, beyond the two LCD screens of the HMD), head position and orientation are captured with a 6-degrees-of-freedom electromagnetic sensor (Ascension Technology's Flock of Birds) and used to drive in real time the display of the target as well as its resulting sound. With such a design, the target can be virtually located (both optically and acoustically) at any distance and along any direction. The Flock of Birds runs at 100 Hz and has an intrinsic latency of 60 to 70 ms (in part because of its built-in filters). The 3D display of the target is achieved using OpenGL (under C++) by applying the recorded head motion to the virtual camera.
To produce convincing auditory azimuth and elevation effects, HRTFs should be measured and tuned separately for each single listener, which is both time-consuming and expensive to implement. For the above reasons, we used a simplified model (Brown & Duda, 1998) which assumes an approximately spherical head for synthesizing binaural sound from a monaural source. The components of the model have a one-to-one correspondence with the shoulders, head, and pinnae, with each component accounting for a different temporal feature of the impulse response. The model is parametrized to allow for individual variations in head size, and a reverberation section is implemented to acoustically simulate the reverberant characteristics of a real room. This model allows a flexible implementation using the real-time synthesis environment PD (Pure Data)1, so that the sound generation is performed externally to the visual simulation. Open Sound Control (OSC) formatted messages are sent to PD via UDP sockets, packing the six coordinates of the head (3 translational and 3 angular) in the egocentric reference system. The latency of the system is negligible, since the network communication runs with about 0.1 ms delay, while the latency of the audio engine is about 1.45 ms, well below the latency of the motion tracking system.
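
For illustration, the head-to-sound link can be sketched as follows. The authors' implementation is in C++; the python-osc package, the "/head" address pattern and port 9000 used here are assumptions made for this sketch, not their actual set-up.

    # Minimal illustration of streaming the six head coordinates to Pure Data
    # over OSC/UDP (python-osc, the "/head" address and port 9000 are assumed).
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)   # PD patch listening on UDP 9000

    def send_head_pose(x, y, z, yaw, pitch, roll):
        """Pack 3 translational + 3 angular coordinates into one OSC message."""
        client.send_message("/head", [float(x), float(y), float(z),
                                      float(yaw), float(pitch), float(roll)])

    send_head_pose(0.02, 1.65, -0.10, 12.0, -3.5, 0.8)   # example pose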

We will use this set-up to test whether humans can judge the reachability of a static virtual
target that can be seen, that can be heard or that can be both seen and heard. We expect that only
perceivers that are allowed to move relative to the target will be able to perform the task, because
egocentric distance will only be specified in the invariant relation between optics and inertia or
acoustics and inertia (and of course between optics, acoustics and inertia).

References
Blauert, J. P. (1997). Spatial Hearing, rev. Ed. Cambridge, MA: MIT Press.
Brown, C. P., & Duda, R. O. (1998). A structural model for binaural sound synthesis. IEEE Transactions on Audio,
Speech and Language Processing, 6(5), 476-488.
Mantel, B., Bardy, B. G., & Stoffregen, T. A. (2005). Intermodal specification of egocentric distance in a target
reaching task. In H. Heft & K. L. Marsh (Eds.), Studies in Perception and Action VIII, (pp. 173-176). Mahwah,
NJ: Lawrence Erlbaum Associates, Inc., 2005.
Stoffregen, T. A., & Bardy, B. G. (2001). On specification and the senses. Behavioral and Brain Sciences, 24(2), 195-
261.
Wallach, H. (1940). The role of head movements and vestibular and visual cues in sound localization. Journal of
Experimental Psychology, 27, 339–368.

1 http://puredata.info/
3rd International Conference on Enactive Interfaces (Enactive /06) 175
Visual cue enhancement in the vicinity of the tangent point
can improve steering in bends

Franck Mars
Institut de Recherche en Communications et Cybernétique de Nantes (IRCCyN), France
franck.mars@irccyn.ec-nantes.fr

When approaching and negotiating a bend, the driver spends a significant amount of time looking in the vicinity of the tangent point (TP), i.e. the point where the direction of the inside edge line seems to reverse from the driver's viewpoint (Land & Lee, 1994). Several models have proposed
different explanations of how and why this cue provides the driver with an input signal to guide
steering (Boer 1996, Land 1998, Salvucci & Gray 2004).
The general objective of the present experiment was to contribute to a better understanding of
the role of gaze positioning in the vicinity of the TP when negotiating a bend. The paradigm
required subjects to constantly look at a target spot while driving. The effects of gaze positioning on
several indicators of driving performance were analyzed and compared with a control condition
where gaze was unrestrained. The rationale of this paradigm was that the more the constrained gaze
would depart from the usual (and supposedly optimal) gaze positioning, the more steering would be
altered. If drivers learned to look in the direction of the TP in order to estimate the curvature of the
road (Land 1998), diverting their gaze from the TP could result in some modification to steering.
Similarly, if looking at the intended path is the usual and most efficient strategy (Boer 1996, Wilkie & Wann 2005), restraining gaze position to the TP, or even further inside the bend, could cause some perturbation. Alternatively, if any salient far point moving along the road can be used to
control the trajectory (Salvucci & Gray 2004), no differences should be observed between
experimental conditions.

Figure 1. Left: Positions of the 5 target points. Right: Video frame showing the target point
positioned on the TP while negotiating a left bend.

Four female and nine male drivers, between 20 and 57 years of age, participated in the experiment. The experiment was conducted using the fixed-base SIM2 simulator developed by the MSIS laboratory (INRETS). The graphic database reproduced a real test track, which consisted of 4 straight lines and 14 bends. Six experimental conditions were repeated twice. In the control condition, the participants were allowed to sample the visual scene as they wished. In the test trials, they were required to keep looking at a target point that was displayed each time the vehicle approached a bend. The target was either superimposed on the TP, or situated around the TP with a lateral offset (Figure 1).
The results confirmed that looking in the vicinity of the TP is an efficient strategy for controlling steering. Indeed, the drivers did not show any decline in driving performance in spite of being prevented from sampling the visual scene in their usual, varied way. More importantly, giving the driver a visual reference point increased control stability, both at the trajectory and at the steering-action level, as evidenced by the reduction in lateral position variability (not shown) and in the number of steering reversals (Figure 2B), respectively (p<.05 in all cases). This suggests that explicitly displaying a reference point close to the TP may have enhanced the weight and reliability of distant visual cues for steering control. This reference point does not need to be the TP proper: any salient visual cue that moves along the road and can be tracked by the driver, such as a lead car, is a good candidate for monitoring lateral control. This process can be modelled by a simple proportional controller, as suggested by Salvucci & Gray (2004).
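
As an illustration of this proportional-controller idea (in the spirit of Salvucci & Gray, 2004, rather than a reproduction of their exact two-point model), steering can be updated in proportion to the visual angle of a salient far point and to its rate of change; the gains and time step below are arbitrary:

    # Illustrative far-point proportional controller: steering changes in
    # proportion to the visual angle of a tracked far point and to its rate
    # of change. Gains and time step are arbitrary placeholders.
    def steering_update(steer, theta_far, theta_far_prev, dt,
                        k_rate=0.3, k_angle=0.05):
        """Return the new steering angle (rad) after one control step.
        theta_far : current visual angle of the far reference point (rad)
        """
        theta_dot = (theta_far - theta_far_prev) / dt
        return steer + (k_rate * theta_dot + k_angle * theta_far) * dt

    # One step: the far point drifts 0.01 rad toward the bend over 50 ms.
    new_steer = steering_update(steer=0.0, theta_far=0.03,
                                theta_far_prev=0.02, dt=0.05)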

Figure 2. A. Average lateral deviation of the car from the lane centre. A positive value
represents a deviation toward the inside edge of the bend. B. Number of steering reversals
(right) compared to the control condition. Error bars represent S.E.M.

Manipulating the lateral position of gaze also significantly influenced the mean lateral
positioning of the car (Figure 2A, F4,48 = 7.58, p<.001). When required to look at the driving lane
(either at its centre or at an intermediate position between the lane centre and the TP), drivers cut
the bends in the same manner as in the control condition. By contrast, the more the driver’s gaze
was directed toward the inside of the bends, the more the trajectories deviated in the opposite
direction, that is toward the centre of the driving lane. This supports the hypothesis that looking at
the future path of travel, not the TP proper, is the strategy the driver effectively uses (Boer 1996;
Wann & Land, 2000).

The results of this study may be relevant to the development of visual driving assistance. Indeed, the next generation of head-up displays will offer the opportunity for wide-field-of-view driving aids that enhance relevant features in the visual scene. This raises the question of which visual cues should be made available in such displays. This study suggests that indicating a point along the oncoming trajectory may be a simple and efficient way to make the control of the vehicle easier.

References
Boer, E. R. (1996). Tangent point oriented curve negotiation. IEEE Proceedings of the Intelligent Vehicles '96
Symposium, Tokyo (Japan), Sep. 19-20, 1996. pp 7-12.
Land, M. F. (1998). The visual control of steering. In L.R. Harris & M. Jenkin (Eds.), Vision and Action (pp 163-180).
Cambridge: Cambridge University Press.
Land, M. F., & Lee, D. N. (1994). Where we look when we steer. Nature, 369, 742-744.
Salvucci, D. D., & Gray, R. (2004). A two-point visual control model of steering. Perception, 33, 1233-1248.
Wann, J., & Land, M. (2000). Steering with or without the flow: is the retrieval of heading necessary? Trends in
Cognitive Science, 4, 319-324.
Wilkie, R. M., & Wann, J. P. (2005). Judgments of path, not heading, guide locomotion. Journal of Experimental
Psychology: Human Perception and Performance, 32, 88-96.
3rd International Conference on Enactive Interfaces (Enactive /06) 177
Improving human movement recovery using qualitative analysis

Barbara Mazzarino1, Manuel J. Peinado2, Ronan Boulic3, Marcelo Wanderley4 & Antonio Camurri1
1 InfoMus Lab-DIST, Università degli Studi di Genova, Italy
2 University of Alcala, Spain
3 Ecole Polytechnique Fédérale de Lausanne, Switzerland
4 McGill University, Canada
Barbara.Mazzarino@unige.it

This paper focuses on the use of qualitative analysis of human gesture for improving the believability of human movements recovered from a small number of sensors. In particular, the pilot study presented here compares expressive features extracted from a real human performance with those extracted from rendered animations of two reconstructions based on the measured sensor data.

Introduction
The work here described is part of the research carried out in the framework of the European
Project EU-IST Network of Excellence ENACTIVE (Enactive Interfaces, strategic objective
Multimodal interfaces). The present study aims at identifying which expressive factors are critical
to explain the believability of reconstructed movements when displayed on virtual characters.
The underlying idea is to apply techniques previously developed by DIST for human gesture analysis also to virtual characters. In this way, it is possible to evaluate the quality of the motion of virtual characters by comparing it with the motion of the corresponding real humans from which it was extracted, and thus to identify the key differences between the same motions performed by a human and by the character.
The main objective of this work is to evaluate whether the real and the virtual movements convey the same qualitative and expressive information, and to compare different rendering techniques with the aim of evaluating which method better maintains the high-level content (Camurri et al., 2005) conveyed by a specific motion. In addition, we want to minimize the number of sensors worn by the performer, so that full-body motion capture can be integrated within the interface of a broader range of applications.

Methodology and Results


A clarinet player was recorded during a performance, with one camera in lateral position. Sensors were fixed on the musician's body in order to accurately track body motion for the subsequent 3D reconstruction, using an OptoTrack system at a 100 Hz sampling frequency. It is important to notice that the sensors were located only on the right side of the musician's body, on the side opposite the video camera.
Two main factors were identified as conveying unbelievability in the reconstructed motion: (i) the occupation of the surrounding space with locally unbelievable postures, and (ii) the low fluidity of the motion. In order to measure these two factors, we first extracted the quantity of movement from all the movies. A snapshot of this work is visible in Figure 1.
The analysis showed that a too strict constraint was imposed on the center of mass position, i.e. it was only allowed to move freely along the vertical axis passing through a mid-ankle point. This led to the generation of a second set of reconstructed motions in which the movement of the center of mass was guided by the recorded instrument movement along the forward-backward direction (but was prevented from projecting outside the support polygon). The feet constraints were also adjusted to better reflect the musician's posture.
For this study, fluidity was considered in two different ways:
- as a result of the segmentation of the motion (many motion fragments mean an overall reduction of motion fluidity); Table 1 (second column) shows an improvement of this segmentation in the subsequent sets of movies;
- as a measure of the agreement between the motions of different parts of the body, here the upper and lower body. This is a real-time measure of fluidity, more related to the observer's point of view. When there is a relevant disagreement between the motions of the two subparts, the ratio of the measured energies generates peaks: the higher the value of these peaks and the longer their duration, the more disagreement is present, and hence the less fluidity. Table 1 (last column) shows the main results of the fluency improvement.
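
As a simplified stand-in for this agreement measure (not the InfoMus/DIST implementation), one can compute a frame-wise quantity of motion for the upper and lower body from marker positions and flag peaks in their energy ratio:

    # Simplified stand-in (not the authors' implementation) for the upper/lower
    # body agreement measure: frame-wise quantity of motion of each body half
    # and peaks in their ratio as markers of reduced fluidity.
    import numpy as np

    def quantity_of_motion(positions):
        """positions: array (frames, markers, 3). Summed marker speed per frame."""
        velocities = np.diff(positions, axis=0)
        return np.linalg.norm(velocities, axis=2).sum(axis=1)

    def energy_ratio_peaks(upper_positions, lower_positions, threshold=5.0):
        """Frames where the upper/lower energy ratio departs strongly from balance."""
        upper = quantity_of_motion(upper_positions) + 1e-9
        lower = quantity_of_motion(lower_positions) + 1e-9
        ratio = np.maximum(upper / lower, lower / upper)   # symmetric disagreement
        return ratio, np.where(ratio > threshold)[0]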

Figure 1. Analysis of the initial part of the clarinet player's movements: the focus is on the first two gestures (the attack of the music performance). The upper graph represents the motion bells extracted from the real-time algorithms and from the real movie.

Table 1. Overview of the results obtained from the analysis of fluency.

Analysed movie                                              Number of motion phases   Average value of ratio   Max value
REAL                                                        10                        4.75                     56.18
First set of movies, off-line algorithm                     17                        31396.76                 124,431.00
First set of movies, real-time algorithm                    16                        1571.12                  59,626.70
Second set of movies, 2 iterations (worst case)             13                        19.21                    1,423.22
Second set of movies, 3 iterations (best case)              13                        18.06                    776.17
Second set of movies, 10 iterations, off line               13                        20.45                    1,444.72
Last set of movies with postures correction, off-line       12                        5.52                     195.02
Last set of movies with postures correction, real-time      16                        10.78                    169.64

The output of the first analysis provided valuable hints to the designers of the Inverse Kinematics reconstruction technique for reducing the differences in quality. This led to the proposal of an improved balance constraint. The resulting set of animations was then analyzed again. This new analysis demonstrated an increase in the quality of the virtual motion. A particularly significant criterion is the fluidity cue of motion, which in the first set of motions was reduced due to a disagreement between the motion of the upper and lower body parts. In the new set of motions this disagreement was significantly reduced and the related reconstructed motions appeared more fluent. Future tests will examine how to also model, in the reconstructed human, lateral movements of the center of mass, in order to reflect the weight transfer observed in real musicians' performances.
Using techniques developed for human gesture analysis, we showed how it is possible to extract high-level motion features from reconstructed motion and to compare them with the same features extracted from the corresponding real motions. Moreover, these features allow a qualitative comparison between different rendering techniques. This provides a valuable complement to believability studies based solely on analyzing viewer feedback through questionnaires.

References
Camurri, A., De Poli, G., Leman, M., & Volpe, G. (2005). Toward Communicating Expressiveness and Affect in
Multimodal Interactive Systems for Performing Art and Cultural Applications. IEEE Multimedia Magazine,
12(1), 43- 53, IEEE CS Press, 2005.
Peinado, M., Herbelin, B., Wanderley, M., Le Callennec, B., Boulic, R., Thalmann, D., & Méziat, D. (2004). Towards
Configurable Motion Capture with Prioritized Inverse Kinematics. Proceedings of International Workshop in
Virtual Rehabilitation (IWVR/SENSOR04), Lausanne Sept. 16-17th.
3rd International Conference on Enactive Interfaces (Enactive /06) 179
Enactive theorists do it on purpose: Exploring an implicit demand
for a theory of goals

Marek McGann
Department of Psychology, Mary Immaculate College, University of Limerick, Ireland
Centre for Research in the Cognitive Sciences (COGS), University of Sussex, UK
marek.mcgann@mic.ul.ie

Introduction
The enactive approach to cognition advocates an active mind. Rather than being merely receptive and responsive to events in our environments, we are engaged with, exploring, and making sense of our environment. This approach is saved from being either behaviouristic or eliminative by dealing in norm-governed activity rather than simply «brute» mechanistic dynamics. The activity of a cognitive system, from an enactive view, is not simply behaviour; it is action.

Though this view has been applied in a range of domains, the emphasis to date has been on low-level cognitive phenomena examined using the tools of artificial life, biology and the psychology of perception. If the approach is to be raised to examine more human, personal-level interactions with the world, then some work will need to be done in order to identify what kinds of activity count as action (and enaction) and what kinds of norms might be applied.

The kinds of norms and structuring of actions at the lower levels of cognition include autopoiesis (Maturana & Varela, 1987; Thompson, in press) and simple forms of adaptivity (Di Paolo, 2005), such as homeostasis and classical and operant conditioning. These norms can provide us with some understanding of how the activity of a cognitive system counts as acting rather than simply behaving, but they do not provide much insight into the kinds of actions and norms expressed by people at higher, more personal levels of cognition. Nevertheless, it is clear that the enactive approach is intended as a comprehensive approach to understanding the phenomenon of mind, from the cellular level right up to encultured human activity.

Within the approach, we continually discuss actions, teleology, goal-directed behaviour, the
motivated nature of cognition and skilful engagement with the world. Discussion of such concepts
makes goals a fundamental pillar of the enactive approach, but there has been surprisingly little
explicit discussion of the concept. Just what kinds of goals are we talking about? What theory of
action (action simply and pre-theoretically understood as goal-directed behaviour) underlies the
enactive approach?

The problem of enactive goals


There are many approaches to goals and concomitant definitions of action extant in Cognitive
Science. The vast majority of these accounts are not available to the enactive approach, however,
depending as they do on representational descriptions of goals and computational or propositional
accounts of the practical reasoning that such goals structure. Such traditional approaches to
cognition and reasoning are normally eschewed by enactive theorists. The enactive approach must
be bolstered with a theory of goals, a theory of action, which can run the range of cognition from
cell to culture in the coherent and unified manner that is the approach's greatest strength.

A non-representational and non-propositional theory of goals provides something of a
challenge, given the basic phenomena such a theory would normally be expected to explain. In
order for something to be a goal for us, there must first be some possibility of failure. That is, for it to
be something other than simply that which I am caused to do, it must at least be conceivable that it not
occur. This places goals and actions in what Wilfrid Sellars refers to as the «space of reasons». As
John McDowell (1996) explains it, for something to be in the space of reasons, we must have
justifications for it, rather than just exculpations. For there to be a possibility of failure, though,
there must be some criterion against which the action is judged, some means by which success or
failure are sensed or measured. For low-level cellular activity, continued living, the maintenance of
autopoiesis serves as a possible norm, the implicit and non-representational «goal» against which
success or failure can be measured. For higher level actions, such as convincing someone to lend
me money, controlling the centre of a chess board or driving home, autopoiesis is not so
immediately satisfactory. Some more detailed account is needed.

Related to the possibility of failure is that of trying. In ordinary conversation and in
traditional Cognitive Science, it is the presence of some continuing goal that allows us to identify
many disparate behaviours as being the same action – as serving the same goal (assumed or
explained as something represented to the agent, something they «have in mind»). Once again, an
enactive approach which refuses to use low-level representations as explanatory of psychology
demands some account of implicit but still potent goals to identify the correspondence between
what may be extremely variable, and possibly opposing or contradictory, behaviours.

Toward a resolution: Embodying Juarrero's dynamic systems account


I suggest that progress can be made on such a theory of goals and action by adapting the
dynamic systems theory put forward by Alicia Juarrero (1999). Juarrero has put forward a theory which
identifies our intentions and goals as attractors in the dynamic phase space of the brain. This
provides a means of understanding the implicit and situated nature of actions and their goals.
However, Juarrero's view is overly neuro-centric for the enactive approach, adopting a
connectionist but representational view of the meaning which transforms «mere» behaviour into
action. In this paper I put forward some suggestions which would both allow us to adapt Juarrero's
theory for a more deeply embodied and enactive cognitive science, and provide a means of more
completely characterising an enactive psychology.

References
DiPaolo, E. (2005). Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences, 4, 429-452.
Juarrero, A. (1999). Dynamics in action. Cambridge, Mass.: MIT Press.
Maturana, H. R., & Varela, F. J. (1987). The tree of knowledge: the biological roots of human understanding. Boston:
New Science Library.
McDowell, J. (1996). Mind and world. Cambridge, Mass.: Harvard University Press.
Thompson, E. (in press). Mind in life. Cambridge, Mass.: MIT Press.
3rd International Conference on Enactive Interfaces (Enactive /06) 181
Visual anticipation of road departure

Isabelle Milleville-Pennel (1) & Aurélia Mahé (2)

(1) CNRS and University of Nantes, IRCCyN, France
(2) University of Nantes, France
Isabelle.Milleville-Pennel@irccyn.ec-nantes.fr

Theoretical framework
The aim of this study was to develop a driving assistance device able to take control of the
vehicle if necessary while favouring good cooperation at the human-machine interface. A major
difficulty in the development of such systems is to clearly determine when they must act in order
to avoid an accident. If the system intervenes too early in the driving process, it can give the driver
the feeling of not being involved in the driving task, which can be dangerous. For this reason it is
important to precisely define the driver's perceptual capacities and limits, so that the support system
can be adjusted to the driver's perceptual behaviour and fits into the sensory-motor loops instead of
interrupting them.
Our aim is thus to clearly define which visual information is used by drivers to determine
when they have to correct their trajectory to avoid a road departure. Different studies indicate that
temporal information, the Time to Line Crossing (TLC), can be used by the driver. TLC
corresponds to the time the vehicle would need to reach one or other of the lane boundaries if
the driver did not modify the trajectory and speed. The studies carried out by Van Winsum (Van
Winsum, de Waard & Brookhuis, 1999) revealed that TLC can be used for continuous control of
the steering task. Such information could be a good indicator for determining when the driver
support system has to intervene in the steering task. Indeed, we could suppose that the system
intervenes as soon as the TLC drops below a threshold corresponding to the minimum TLC
accepted by the driver for safe driving. Nevertheless, the best way to compute this TLC still has to
be determined. For this reason we compared two different ways to compute TLC, each
corresponding to a particular zone of interest for the driver. The first corresponds to TLC in the
heading direction of the car (TLCdistal) and the second corresponds to TLC in the lateral direction of
the car (TLCproximal). Moreover, we also investigated eye position while driving to determine whether
these TLC measures are simply correlated with driver behaviour or whether they also correspond to
visual information directly used by drivers to correct their trajectory.

Method
We used a fixed-base driving simulator (with the Sim2 software, developed by the MSIS team,
INRETS/LCPC). Different car trajectories were generated in curved (with radii from 50 to 500 m)
and straight road sections. The trajectory was sometimes modified in order to cause a road departure
if not corrected (5 mm/m, 7.5 mm/m or 10 mm/m deviation from the standard trajectory). Six participants
(2 males and 4 females, 5 years average driving experience, aged between 21 and 28 years) were
asked to watch all the trajectories on a screen (visual angle 80° wide and 66° high).
Participants could not actually drive the simulator, to avoid interfering with the deviation introduced
in the car trajectory, but they were instructed to move the wheel as if they were driving and to
brake in order to stop the simulation when they felt that the vehicle heading could still comfortably
be corrected to prevent a crossing of the lane boundary. We computed the TLC of the simulated car at
the moment of braking. Two TLC values were computed (cf. Figure 1a): TLCproximal, which corresponds to
the lateral distance (between the exterior edge of the road and the right wheel of the car) divided by
lateral speed, and TLCdistal, which corresponds to the distance between the centre of the front of the car
and the exterior edge of the road divided by speed. We also measured eye movements with the
iViewX system (SMI) in order to determine the particular zones of interest in the visual scene where
drivers look to assess when the trajectory of the car must be corrected.
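
To make the two computations concrete, here is a minimal sketch (our own illustration, not the simulator code; function and variable names are assumptions):

def tlc_proximal(lateral_distance_m, lateral_speed_ms):
    # Lateral TLC: lateral distance between the right wheel and the road edge,
    # divided by the current lateral speed (infinite if the car is not drifting).
    if lateral_speed_ms <= 0:
        return float("inf")
    return lateral_distance_m / lateral_speed_ms

def tlc_distal(heading_distance_m, speed_ms):
    # Heading TLC: distance from the centre of the front of the car to the road
    # edge, measured along the heading, divided by the car speed.
    if speed_ms <= 0:
        return float("inf")
    return heading_distance_m / speed_ms

# Example: 0.4 m of lateral margin and a 0.1 m/s drift give TLCproximal = 4 s.
print(tlc_proximal(0.4, 0.1), tlc_distal(30.0, 20.0))
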
Results and discussion
Results (cf. Figure 1b) indicate that, contrary to TLCdistal, TLCproximal is correlated with vehicle
trajectory deviation. This suggests that this TLC is indeed a good indicator of the driver's safety
margin, in the sense that it makes it possible to determine when the driver usually intervenes to modify
a particular car trajectory and thus when the driving assistance device must intervene to avoid an accident.

[Figure 1, not reproduced here. Panel a) illustrates the computation of the two TLC values: TLCdistal = distance along the heading / speed; TLCproximal = lateral distance / lateral speed. Panel b) plots lateral speed (m/s) against TLCproximal (s), with a fitted power law y = 0.2035x^-0.5909, R² = 0.69.]

Figure 1. a) Computation of the two TLC values; b) Correlation between lateral speed and TLCproximal.

Nevertheless, analysis of eye movements indicates that, just before feeling the necessity to
modify the trajectory of the car, drivers often look far away at the intersection of the heading and
the road side (cf. Figure 2). This visual position corresponds to an angle with the base formed by the
lateral distance between the middle of the front of the car and the exterior part of the road. Thus this
visual angle evolves exactly like lateral speed and must be correlated with TLCproximal. This result
indicates that while TLCproximal is correlated with the behaviour of the car, it is perhaps not the visual
information that drivers use to regulate the position of the car on the road. This interesting result
will be discussed in the context of studies showing the implication of particular zones of interest in
the visual scene for the control of displacement (Wann & Land, 2000).


Figure 2. Eye position on the road when drivers choose to correct the trajectory of the car.

References
Van Winsum, W. V., de Waard, D., & Brookhuis, K. A. (1999). Lane change maneuvers and safety margins.
Transportation Research Part F, 2, 139-149.
Wann, J. P., & Land, M. (2000). Steering with or without the flow: is the retrieval of heading necessary? Trends in
Cognitive Sciences, 4, 319-324.
3rd International Conference on Enactive Interfaces (Enactive /06) 183
Dynamics of visual perception during saccadic programming

Anna Montagnini & Eric Castet


Institut de Neurosciences Cognitives de la Méditerranée, CNRS, France
Anna.Montagnini@incm.cnrs-mrs.fr

Humans explore their visual surroundings by means of rapid shifts of the eyes, the saccades,
in order to align the higher acuity region of the retina, or fovea, with regions of interest in the scene.
The quality and the type of visual information that is processed by our visual system depend
fundamentally on the dynamic orientation of our gaze. In turn, the mechanisms that guide the
orientation of the eyes in space rely on low-level processing of the visual input as well as on
higher-level cognitive factors (e.g. the intention to look for something in particular). Such a strong
synergy between a high-resolution optical system and the oculomotor system has evolved into the
efficient mechanisms of active vision in primates: as a whole, this system is a remarkable model
system for understanding sensorimotor integration and enactive knowledge.
Visual stimuli presented during the preparation of a saccade are more accurately perceived
when their location coincides with the saccadic target. In other words, a perceptual target is more
efficiently processed when it is spatially coincident with the saccadic target as compared to when
the two targets (visual and saccadic) are displayed at different locations. This phenomenon has been
interpreted in terms of a tight link between oculomotor programming and the deployment of
selective visual attention.
However, several experimental studies have shown that a part of attentional resources can be
diverted from the saccadic target without a dramatic cost for saccadic accuracy (Kowler et al.,
1995). Our aim here is to spatio-temporally characterize this independent component of attention
during saccadic preparation. For this purpose, we designed the present experiment as a dual-task,
where the execution of a saccade constitutes the primary task, whereas an orientation discrimination
judgment constitutes the secondary task. In addition, the perceptual task (orientation discrimination)
was designed as a Posner-like paradigm (Posner, 1980), where the target location cue is only
probabilistically defined, i.e. it indicates the spatial location where the discrimination target will be
presented with a known probability but not with absolute certainty. More precisely, across blocks we
varied the probability (p) of presenting the perceptual target at the saccadic goal, or opposite to it (1-p).
Thus, a situation of synergy (p=75%), neutrality (p=50%) or conflict (p=25%) arises between
saccadic programming and the voluntary, independent attentional component.
A previous study of our group (Castet et al., 2006) has shown that visual performance
improves dramatically at the saccadic target location during the first 150ms after cue onset. In order
to further investigate attentional modulation across time and generalize the previous results to other
locations in space, the perceptual performance was assessed at different moments during the phase
of saccadic preparation (∼200ms).

Methods
Observers fixated a dot displayed at the center of a circular array of eight disks (eccentricity
=6 deg, diameter=2 deg) containing white noise dots whose contrast randomly ranged from 0% to
±50% relative to the mean luminance of the gray display. After a random delay, the fixation point
was replaced by an arrow (saccadic target location cue) indicating the randomly selected direction
of the required oblique saccade (±45°; ±135°), so that the saccadic target was either the upward-left,
downward-left, upward-right or downward right disk. This arrow cue acted not only as a saccadic
location cue but also as a probabilistically-defined discrimination target location cue. For instance,
an arrow pointing upward-right within a block in the p=75% condition would instruct a saccade to
the upward-right circle and at the same time it would indicate that with 75% probability the
discrimination target will be presented at the upward-right circle location and only with 25%
probability will it be presented opposite to it, namely at the downward-left circle location. After a
variable delay from the arrow cue offset, all disks were replaced with Gabor patches (2-cycle/deg
gratings in cosine phase; seven of them were vertical, one tilted) for 25ms. The tilted patch
(discrimination target) could be presented in one of two relative locations with respect to the
saccadic cued location: same (relative angle =0 deg), or opposite (180 deg apart) to it. Subjects
were informed at the beginning of the experimental block about the validity condition, i.e. the
probability p that the discrimination patch would appear at the saccadic goal. A mask consisting of static
noise dots covering the whole screen immediately followed the Gabor patches for 37.5 ms, in order
to minimize visible persistence. After this, the initial array of eight noise disks was displayed again.
After the execution of the saccade, observers reported the orientation of the tilt with respect to
vertical by pressing one of two buttons. The button-pressing automatically initiated the next trial.
Stimuli were displayed on a 21-in. color monitor (GDM-F520, Sony, Japan) driven by a display
controller (Cambridge Research System Visage 2/3F, Cambridge, UK) with a 160-Hz frame rate
(frame duration = 6.25 ms). Eye movements were recorded with an Eyelink II infrared video-based
eyetracker (500Hz; resolution <0.1deg). Online control of saccade parameters was performed at
each trial and auditory feedback about imprecise saccades was immediately given. Data about
saccades as well as orientation judgment were stored and analyzed offline with MATLAB.
An adaptive staircase method was used to assess the orientation discrimination thresholds in
the perceptual task (orientation judgment, 2AFC, button press). We took the log-ratio of the
discrimination thresholds at the saccadic goal and opposite to it as a measure of
the spatial specificity of attentional deployment.
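
As a minimal sketch of this index (our own notation, not the authors' analysis code), the spatial specificity for one subject and delay could be computed as:

import math

def spatial_specificity(threshold_opposite, threshold_at_goal):
    # Log-ratio of orientation-discrimination thresholds measured opposite to
    # the saccadic goal vs. at the saccadic goal; positive values indicate
    # better (lower-threshold) performance at the saccadic goal.
    return math.log(threshold_opposite / threshold_at_goal)

# Example: a threshold halved at the saccadic goal gives log(2) ≈ 0.69.
print(spatial_specificity(8.0, 4.0))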

Results
First, our results confirm the previous finding that orientation discrimination performance
improves dramatically (thresholds are reduced by a factor of two) at the saccadic target location.
In addition, we found that the spatial distribution of attentional resources, as assessed by our
measure of spatial specificity, was always (for all subjects and time delays) significantly modulated
by the cue validity. For the shortest delay after cue onset (6ms) the spatial specificity did not
significantly differ between our saccadic task and a control covert-attention task, which was
identical to the main task but no saccade was required. With the longest delay after cue onset
(150ms), though, a strong bias in favour of the saccadic target location was consistently observed.
Interestingly, mean saccadic latency was not significantly affected, across subjects, by the
cue-validity manipulation. In particular, the deployment of attention away from the saccadic target
(for instance in the p=25% condition) did not imply a cost in terms of saccadic latency.
Overall, our results suggest that a measurable independent component of attention can
strongly affect the spatial distribution of perceptual resources during saccadic programming. The
independent component is prominent very early during saccade preparation. It is only in the late
part of saccadic latency that the automatic influence of saccadic programming on perception
becomes apparent.

Conclusions
Our findings might help in understanding the dynamic nature of motor-planning-related effects
on visual perception. Eye movements are a small, fascinating system which is often taken as a
cartoon of the more complex action systems. Because it avoids the complex problems of dynamics
(i.e. torques, forces…) of, say, arm movements, it can give us precise and compact insights about
sensorimotor interactions. Overall, active vision has major potential applications in the fields of
artificial vision, robotics, active sensor control and embedded autonomic systems.

References
Castet, E., Jeanjean, S., Montagnini, A., Laugier, D., & Masson, G. S. (2006). Time-course of presaccadic modulation
of visual perception. Journal of Vision, 6(3), 196-212.
Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The role of attention in the programming of saccades. Vision
Research, 35, 1897-1916.
Posner, M. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32, 3-25.
3rd International Conference on Enactive Interfaces (Enactive /06) 185
Motor performance as a reliable way for tracking face
validity of Virtual Environments

Antoine Morice (1), Isabelle Siegler (1) & Benoît Bardy (2,3)

(1) Laboratoire Perception et Contrôle Moteur, UFR STAPS, University of Paris Sud XI, France
(2) Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
(3) Institut Universitaire de France, France
antoine.morice@staps.u-psud.fr

The intrinsic presence of temporal delay in Virtual Environments (VE) (we refer to the
concept of “end-to-end latency” (ETEL) (Adelstein et al., 1996)), as well as communication time
delay in tele-operation (Ferrell, 1963), predictably leads to a degradation of users'
performance. For engineers working on VE, ETEL is commonly associated with the concept of
spatio-temporal realism and requires technical solutions to overcome its undesirable consequences.
We believe that scientists can partly respond to the engineers' concerns, albeit being
concerned with more theoretical problems. For instance, the study of perception-action coupling
via the “moving room” paradigm suggested that an unusual matching between two sensory
modalities induces postural instabilities. This behavioral feature can be used to detect simulator
sickness (Stoffregen & Smart, Jr., 1998). In a similar way, exposure to ETEL in VE leads to a
“sensory re-arrangement” (Welch, 1978) between several sensory modalities. Its motor and
perceptual consequences have to be studied.

An experimental ball-bouncing apparatus (cf. Figure 1A and B) was set up using VE
technology. This VE allows the monitoring of human periodic behaviors when participants are
asked to manipulate a physical racket which controls the displacement of a virtual racket, in
order to achieve regular virtual ball-bouncing. ETEL refers in this case to the temporal mismatch
between the movement of the physical racket and its visual consequences (the displacement of the
virtual racket). We designed a psychophysics experiment to estimate the perception threshold of ETEL and its
damaging consequences on motor performance during a ball-bouncing task. The ETEL of our set-
up was manipulated to provide nine different values ranging from 30 to 190 ms (ETEL conditions).
Subjects performed rhythmical movements either to regularly bounce a virtual ball (With Ball
condition) or just to observe the visual consequences of their movements (No Ball condition) in
the nine ETEL conditions. After each trial, subjects had to verbally report whether both rackets
appeared synchronous or not. From logistic curves fitted to the participants’ answers (cf. Figure 1C
and D), the computation of the individual Point of Subjective Equality for 50% discrimination
likelihood (PSE) revealed that bouncers become aware of ETEL from only 99 ms on average in the
NB condition. This discrimination threshold was lowered to 88 ms when regular collisions with the
virtual ball were expected (WB condition). A paired t-test (N = 14, Diff. = 18.70, SD = 31.54,
t = 2.22, df = 13, p = .04) revealed a significant difference in the PSE values between these two
sessions. When considering motor performance in the WB condition, it appears that while mean
performance is significantly degraded above 110 ms, its standard deviation increases as
soon as ETEL increases.
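
A minimal sketch of how such a PSE can be extracted from the verbal reports, assuming a standard logistic psychometric function (hypothetical data; this is not the authors' analysis code):

import numpy as np
from scipy.optimize import curve_fit

def psychometric(etel_ms, pse_ms, slope):
    # Probability of reporting the two rackets as asynchronous.
    return 1.0 / (1.0 + np.exp(-slope * (etel_ms - pse_ms)))

# Hypothetical proportions of "asynchronous" answers for one participant
# in the nine ETEL conditions (ms).
etel = np.array([30, 50, 70, 90, 110, 130, 150, 170, 190], dtype=float)
p_async = np.array([0.05, 0.10, 0.20, 0.45, 0.60, 0.80, 0.90, 0.95, 1.00])

(pse, slope), _ = curve_fit(psychometric, etel, p_async, p0=[100.0, 0.05])
print(f"PSE (50% discrimination threshold): {pse:.0f} ms")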

Conclusions and recommendations to engineers can be drawn from this study. (1) The two
measured perception thresholds of ETEL are well above the baseline ETEL value (30 ms) of
our virtual environment. Consequently, realistically-perceived enactive interfaces can be designed
despite significant ETEL. (2) The difference found between the two psychophysical thresholds in the WB
and NB conditions shows that participants' judgments partly rely on ball-racket interactions to
discriminate ETEL. The earlier detection of ETEL in the WB condition, compared to the NB
condition, reflects the human ability to perceive spatio-temporal anomalies (O'Sullivan et al.,
2003), particularly when observing collisions. Reliable computation and rendering of collisions in
VE therefore seem to deserve more effort than ETEL shortening in the development of
realistically-perceived interfaces. (3) Lastly, perception of ETEL and performance degradation do not
appear to overlap. The variability in motor performance turns out to be a more sensitive measure than the
psychophysical threshold for evaluating the sensitivity of VE users to ETEL. This general result
suggests that a subjectively “good” performance of a VE, when users have the feeling that the real
movement is depicted in “real time”, does not guarantee the functional validity of the VE.


Figure 1. (A and B) Virtual ball-bouncing set-up. During the WB session, subjects are asked to
periodically hit the ball to a given height (a target). With delayed visual feedback, the
manual control of the physical racket makes it possible to control the delayed displacement of a
virtual racket in order to interact periodically with a virtual ball. (C and D) Psychometric
functions of all subjects for the NB and WB sessions. Mean values of the PSE (i.e. the 50% ETEL
discrimination threshold) and r² are indicated in the title.

References
Adelstein, B. D., Johnston, E. R., & Ellis, S. R. (1996). Dynamic response of electromagnetic spatial displacement
trackers. Presence-Teleoperators and Virtual Environments, 5, 302-318.
Ferrell, W. R. (1963). Remote manipulative control with transmission delay. Perceptual and Motor Skills, 20, 1070-
1072.
O'Sullivan, C., Dingliana, J., Giang, T., & Kaiser, M. K. (2003). Evaluating the Visual Fidelity of Physically Based
Animations. Proceedings of Special Interest Group on Graphics and Interactive Techniques, (SIGGRAPH'03)
(22)(3). ACM Transactions on Graphics.
Stoffregen, T. A., & Smart L. J., Jr. (1998). Postural instability precedes motion sickness. Brain Research Bulletin, 47,
437-448.
Welch, R. B. (1978). Perceptual modification: Adapting to altered sensory environments. New York: Academic Press.
3rd International Conference on Enactive Interfaces (Enactive /06) 187
Evaluation of a motor priming device to assist car drivers

Jordan Navarro, Frank Mars & Jean-Michel Hoc


IRCCyN, CNRS and University of Nantes, France
Jordan.Navarro@irccyn.ec-nantes.fr

Introduction
Automatic devices for lateral control, developed in order to support drivers for safety
and comfort reasons, can be categorized into two classes. The first class of driving assistance warns
the driver when a certain level of risk is reached (Lane Departure Warning Systems: LDWS),
whereas systems that belong to the second class actively contribute to steering by applying some
torque on the wheel in order to bring the car back into the lane (Lane Keeping Assistance Systems:
LKAS). In terms of human-machine cooperation, all the systems presented perform mutual control (Hoc
& Blosseville, 2003). LDWS are assumed to improve situation diagnosis, but interfere in no way
with actual steering. On the other hand, LKAS intervene at the action level. In other words, they are
designed to blend with the driver’s sensorimotor control processes.
The present work compares haptic and auditory warning (both LDWS) to a new way of
prompting the driver to take action via the haptic modality. This assistance, called “motor priming”,
can be described as a directional stimulation of the hands, which consists of an asymmetric
vibration of the wheel. More precisely, the wheel oscillates with one direction of the oscillation
being stronger than the other. This gives the impression that the wheel vibrates and “pushes” lightly
in the direction where the corrective manoeuvre must be performed. This is not an LKAS proper, in
the sense that its contribution to steering is minimal, but it provides some motor priming in addition
to warning. Thus, it can be considered as a driving assistance at the boundary between LDWS and
LKAS.
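
As an illustration of the principle only (a sketch under our own assumptions, not the actual actuator command used in the experiment), an asymmetric oscillation of this kind could be generated as follows:

import numpy as np

def motor_priming_torque(t_s, freq_hz=5.0, strong=1.0, weak=0.4, direction=1):
    # Asymmetric triangular oscillation: the half-cycle toward the corrective
    # direction is stronger than the return half-cycle, so the wheel vibrates
    # while lightly "pushing" toward the correction. All values are illustrative.
    phase = (t_s * freq_hz) % 1.0
    if phase < 0.5:
        amp = strong * (1.0 - abs(phase - 0.25) / 0.25)   # push half-cycle
    else:
        amp = -weak * (1.0 - abs(phase - 0.75) / 0.25)    # weaker return half-cycle
    return direction * amp

# One second of the waveform for a corrective manoeuvre to the right.
waveform = [motor_priming_torque(t, direction=+1) for t in np.linspace(0.0, 1.0, 200)]
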
Suzuki and Jansson (2003) compared auditory warning (monaural or stereo) and vibratory
warning to another assistance, which was similar to the motor priming system since it delivered
steering torque pulses to the driver. When subjects were uninformed about the way the pulse-like
system worked, its effect on steering was associated with large individual differences, some subjects
counteracting the assistance and turning the steering wheel in the wrong direction. This study argues
in favour of a direct intervention of the motor priming mode on motor control, even though this
sometimes negatively interferes with some drivers' steering.
The main objective of the present experiment was to determine in a controlled simulator
setting whether or not motor priming can be achieved without negative interference and if there is
some benefit from it compared to more traditional auditory or vibratory warning devices. A
secondary objective was to assess the possible advantage of using multimodal information for
LDWS. Indeed, redundant information presented simultaneously in different modalities has been
proven useful in a variety of tasks (Spence & Driver, 2004).

Method
Twenty participants (2 females and 18 males, 25 years old on average) took part in the
experiment. Participants drove one lap of a simulated test track, 3.4 km long, in each of 20 trials.
Five driving assistances, inspired by systems that were developed by LIVIC (INRETS/LCPC
laboratory, Satory, France), were implemented in a fixed-base simulator by MSIS. All devices
entered into action when the centre of the vehicle deviated more than 80 cm from the lane centre.
• The Auditory Warning mode (AW) was delivered by one of two loudspeakers (the one in the
direction of lane departure). The emitted sound was similar to a rumble strip noise.
• The Vibratory Warning mode (VW) was generated by a regular rectangular oscillation of the
steering wheel.
• The Motor Priming mode (MP) was generated by an asymmetrical triangular oscillation of
the steering wheel.
• The Auditory and Vibratory Warning mode (AVW) was the combination of AW and VW.
• The Auditory and Motor Priming mode (AMP) was the combination of AW and MP.
• Finally, a condition Without Assistance (WA) was used as the control condition.

Unpredictable visual occlusions occurred during driving and caused a lane departure to the
left or to the right of the driving lane. Duration of lateral excursion, reaction time, steering
amplitude and maximum lateral excursion were computed for all lane departure episodes. A
significance level of p<0.05 was used in all tests.

Results

[Figure 1, not reproduced here. Panel A plots the duration of lateral excursion (s) and panel B the maximum rate of steering wheel acceleration (°/s²) for the WA, AW, VW, MP, AVW and AMP conditions.]

Figure 1. Duration of lateral excursion (A) and maximum rate of steering wheel acceleration
(B) without assistance and for each driving assistance type. Error bars represent one standard
error.

All driving assistances significantly improved driving performance by reducing the duration of
lateral excursion (i.e. the time drivers spend outside a safe lane envelope; on average t(19)=10.9,
p<0.001; see Fig. 1A). No significant differences were observed among AW, VW and AVW on the
one hand, or between MP and AMP on the other hand (p>0.05). Moreover, MP and AMP were
significantly more effective than the other systems (p<0.001).
All driving assistances significantly increased the maximum rate of steering wheel
acceleration (which corresponds to the sharpness of the drivers' response; on average
t(19)=14.8, p<0.001; see Fig. 1B). Once again, MP and AMP were more effective than the other
systems (p<0.001).

Discussion
The motor priming modes (with or without added auditory warning) gave rise to faster
maneuvers than warning modes. This supports the hypothesis that providing directional cues at the
motor level is more efficient for steering assistance than giving information that requires
processing by higher-level cognitive processes (as traditional LDWS do).
Also, no performance improvement was observed when motor priming or vibratory warning
was combined with auditory warning, which goes against the idea that multimodal directional
information may improve driving assistance for lateral control.

References
Hoc, J. M., & Blosseville, J. M. (2003). Cooperation between drivers and in-car automatic driving assistance.
Proceedings of Conference on Cognitive Science Approaches to Process Control (CSAPC’03), pp. 17-22,
Amsterdam, The Netherlands.
Spence, C., & Driver, J. (2004). Crossmodal space and crossmodal attention. New York: Oxford University Press.
Suzuki, K., & Jansson, H. (2003). An analysis of driver’s steering behaviour during auditory or haptic warnings for the
designing of lane departure warning system. Japan Society of Automotive Engineers Review, 24, 65-70.
3rd International Conference on Enactive Interfaces (Enactive /06) 189
Some experimental data on oculomotor intentionality

Gérard Olivier
Laboratoire de Psychologie Expérimentale (EA 1189), University of Nice-Sophia Antipolis, France
olivierg@unice.fr

This study concerns the contribution of mentally simulated ocular exploration to the generation of
a visual mental image (e.g., Olivier & Juan de Mendoza, 2001). The purpose of the two following
experiments was to demonstrate that during the experience of mental imagery, the so-called “visual
image”, supposed to represent in the mind an object of the external world, corresponds in reality to
the oculomotor preparation for imitating this external object visually.

Experiment 1
In Experiment 1, repeated exploration of the outlines of an irregular decagon allowed an
incidental learning of the shape. The instruction was to move the eyes quickly along the polygon
before the drawing disappeared from the screen and to state in a loud voice the number of spots
contained in targets situated on the contours. In this experiment the independent variable is the
scanpath formed by the set of saccades jumping from target to target. The subjects were equally
divided into two groups. The polygons presented to the control group contained a target at each
angle (see Fig. 1; left). Given the time constraint, the scanpath of the control group should alternate
breaks on a target, to count the number of spots, with direct saccades to the next target. In these
conditions, as all the targets are in the angles, the scanpath of the control group subjects imitates the
contours of the polygon. The experimental group visually explored polygons in which two targets
were situated in the middle of a side (see Figure 1; right). Directly jumping from one target to the
next one, the scanpath should “cut” two right angles of the polygon, one on the lower right corner
and a second one on the upper left part of the figure. The so-formed scanpath is one of the four
polygons proposed for recognition in a second part of the experiment (see Figure 1; middle). When
asked to recognize the visually explored shape, subjects resorted to visual imagery and compared
the mental image they experienced with the different polygons proposed for recognition. Results
showed that the subjects of the control group recognized the right answer. Conversely, the subjects
of the experimental group did not recognize the shape they had explored but chose the polygon
corresponding to their ocular scanpath. In other words, there was confusion in their mind between
the object and their ocular behaviour upon the object.

Figure 1. Polygon presented to control group (left) and to experimental group (right). In the
middle, the 4 polygons proposed for recognition.

Experiment 2
In Experiment 2, the direction of repeated exploration of a reversible figure such as a Necker cube (see Figure
2, left) varied: either left to right or right to left. Then, both perspective
possibilities were presented for recognition (see Fig. 2, right). As in Experiment 1, during recognition
the subjects recalled a visual mental image of the polygon that they compared with the different
polygons proposed for recognition. The perspective that the subjects recognized depended on the
way they had explored the ambiguous figure in the first part of the experiment. In other words, the
phenomenal image that the subjects experienced was determined by their past ocular behaviour.

Figure 2. The visually explored reversible figure (left) and the two perspective possibilities
proposed for recognition (right: note that none of them corresponds to the right answer as
subjects explored the ambiguous figure).

These data show that the mental image is in fact a covert behaviour, that is, the anticipation of the
behaviour that would permit imitating the object bodily (Piaget & Inhelder, 1966). More generally,
the data suggest that the external world is not represented through mental copies of its objects, but
enacted (Varela, 1989), that is, assimilated to potential sensorimotor behaviours (Piaget, 1963). In
conclusion, the oculomotor images experienced by the subjects during both experiments may
illustrate what Merleau-Ponty (1945) called “motor intentionality”.

References
Merleau-Ponty, M. (1945). Phénoménologie de la perception. Gallimard, Paris.
Olivier, G., & Juan de Mendoza, J. L. (2001). Generation of oculomotor images during tasks requiring visual
recognition of polygons. Perceptual and Motor Skills, 92, 1233-1247.
Piaget, J. (1936). La naissance de l’intelligence chez l’enfant. Delachaux et Niestlé, Neuchâtel.
Piaget, J., & Inhelder, B. (1966). L’image mentale chez l’enfant. PUF, France.
Varela, F. (1989). Autonomie et connaissance. Editions du Seuil, Paris.
3rd International Conference on Enactive Interfaces (Enactive /06) 191
Bodily simulated localization of an object during a perceptual decision task

Gérard Olivier & Sylvane Faure


Laboratoire de Psychologie Expérimentale (EA 1189), University of Nice-Sophia Antipolis, France
olivierg@unice.fr

This experiment deals with the mental simulation of manual movement that sometimes
accompanies the visual perception of an object. Previous studies investigated how visual objects
prime the grasping manual movement they afford (e.g., Craighero et al., 1996; Tucker & Ellis,
2001). The purpose of the present experiment was to determine if such visuomotor priming also
concerns the reaching manual movement that visual objects afford. Instead of varying the intrinsic
properties of the visual stimulus and the response device, their extrinsic properties were varied; in
particular, the distance between the response devices and the subject was manipulated. In this case
it was not expected that a motor compatibility effect of the executed and simulated grasping
movements would be observed, but rather that a motor compatibility effect of the executed and
simulated reaching movements would be obtained. In other words, the purpose was to show that the
visual perception of an object may also be linked to the mental simulation of a manual movement,
thus allowing the subjects' hand to reach the object's spatial position. Consequently, the following
operational hypothesis was made: during a perceptual judgement task, an interaction effect on the
reaction times (RTs) should be observed between the manual response (proximal or distal) and the
stimulus position (proximal or distal). More precisely, the visual perception of a proximal stimulus
should lead to shorter reaction times when it is followed by the motor preparation of a proximal
response, compared to when it is followed by the motor preparation of a distal response.
Conversely, the visual perception of a distal stimulus should lead to shorter reaction times when it is
followed by the motor preparation of a distal response, as opposed to when it is followed by the
motor preparation of a proximal response.

Procedure
To test this hypothesis, 32 students were asked to perform a perceptual judgment task.
It consisted of executing two different manual responses (grasping a proximal or a distal switch)
depending on the colour of the stimulus displayed on a computer screen (see Figure 1). Photos
of chess pieces (bishops) laid out on a chessboard were used as the stimuli.

Figure 1. The two reaching and grasping movements executed on the response device (upper
row). On the left, the starting position. The timer recording the RTs stops when the subject’s
raw). On the left, the starting position. The timer recording the RTs stops when the subject’s
finger ceases pressing on the start button, while the subject is executing either of the two
possible responses. In the middle, the proximal response executed on the nearest switch. On the
right, the distal response executed on the farthest switch. The two kinds of stimuli (lower row).
On the left, example of a proximal stimulus (bishop placed on square D2). On the right,
example of a distal stimulus (bishop placed on square E7). Black frames did not appear on the
actual stimuli, but have been added here to visualize the limits of the proximal and distal areas.
Subjects were assigned to two groups given different instructions. Those in Group A had to
respond with the proximal switch when the bishop was white and with the distal switch when it was
black. For those in Group B, the instructions were reversed. The experimental plan was as follows:
P16 * I2<R2*S2>, with P for participants, R for manual response (proximal or distal), S for
stimulus position on the chessboard (proximal or distal) and I for instruction.

Results
An analysis of variance (ANOVA) was performed on mean RTs. No main effect was
significant. Reaction times (M = 461 ms, SD = 122) did not significantly vary either with manual
response, or with stimulus position, or with instruction. As shown in Figure 2, the interaction
Response by Stimulus position was significant (F(1,30) = 8.90, p < .01). More precisely, when the
subject perceived a proximal stimulus, the reaction was faster when the subject prepared to grasp
the proximal switch than when the subject prepared to grasp the distal switch, F(1,30) = 7.53,
p < .02. Conversely, when the subject perceived a distal stimulus, his reaction was faster when he
prepared to grasp the distal switch than when he prepared to grasp the proximal switch, F(1,30) =
5.19, p < .05.

Figure 2. Mean reaction times for proximal and distal stimuli positions as a function of
proximal and distal responses.

Discussion
The compatibility effect of the prepared and simulated reaching movements observed in the
present experiment reinforces the hypothesis that the spatial properties of an object are closely
linked to the motor preparation of a body movement by the person that perceives this object (Piaget,
1945). This idea was initially expressed by Poincaré (1905) when he considered that, to localize a
point in space, one just needs to imagine the body movements that would permit one to reach it.
In conclusion, this study demonstrated that the mental simulation of a hand movement primed by
visual perception of an object is not limited to the grasping component of the manual
movement but also concerns its reaching component, and thus permits the subject to anticipate the
kinetic transportation of the hand to the place where the perceived object is located. More generally,
by showing that a “bodily simulated localization” of the object may complete its visual perception,
this experiment confirms that an important function of the brain is to simulate behaviours,
consequently raising the issue of the precise nature of the links between perception and action.

References
Craighero, L., Fadiga, L., Rizzolatti, G., & Umilta, C. (1996). Evidence for visuomotor priming effect. NeuroReport, 8,
347–349.
Piaget, J. (1945). La formation du symbole chez l'enfant. Delachaux et Niestle, Neuchâtel.
Poincaré, J. H. (1905/1970). La valeur de la science. Flammarion, Paris.
Tucker, M., & Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Visual Cognition, 8,
769–800.
3rd International Conference on Enactive Interfaces (Enactive /06) 193
Inscribing the user’s experience to enact development

Magali Ollagnier-Beldame
LIRIS laboratory, University of Lyon 1, France
Erté e-praxis, Institut National de la Recherche pédagogique, France
mbeldame@liris.cnrs.fr

Epistemic position
Development, as Vygotsky (1978) suggested, is the process by which a person transforms
himself/herself through an activity, in a socio-techno-cultural context. According to this socio-
constructivist psychologist, learning is prior to development. Indeed, this approach considers that
development occurs during and through activities, and is not linked to “developmental stages” of an
individual person. As theorists of enaction, Maturana and Varela (1987) argued that cognition is a
concrete action, and that every action is cognition. We agree with this assumption. We adopt an
externalist approach to cognition, from situated and distributed cognition theories (Suchman, 1987;
Hutchins, 1995, 2005). According to this externalist approach to cognition, we state that observing
cognition as a dynamic process is not simple. We also think that development must be “in motion”
to be studied, that is, to be in the very act of emerging from interactions between the user and the
system. But activity cannot be directly observed and studied. Though, activity traces can provide
information about the core of activity. To conduct such analyses, we use an ethnomethodological
method in our research.
We can formulate two specific questions for this work. First, we suppose that observing both
the use of visible interaction traces in a collaborative activity and the appropriation of the technical
environment will lead us to better understand the possible role of visible traces in human-computer
interactions. Second, we hope that attending closely to the distribution of activity between humans
and non-humans (Latour, 1989), and to the place of the use of experience via traces, will bring out
the situated, distributed and opportunistic nature of the activity we study.

Concrete situation…
We study the activity of two distant, mediated learners who use a computer and work in pairs.
They design operating instructions for a procedural task, an origami paper-folding. The instructions
they co-write must allow someone else to complete the origami. For that activity, they use a
computer offering a three-area interface: a personal video showing the action of real hands
folding a sheet of paper, and two shared areas, a chat and a text editor. The instructions have to be
written in the text editor. The two shared areas afford traces of what has been done, providing
the co-writers with immediate footprints of their activity.
With this testing, the first goal is to observe the activity of mediated distant co-writing as it takes
place and the possible self-regulation induced by experience traces from the shared areas. More widely,
we also want to observe how cognition is distributed across the different areas of the
interface for this particular task.
All the subjects’ actions on the computer are recorded: chat interactions, edited text, video playbacks. We
have transcribed the interactions between co-writers occurring in the discursive areas of the environment.
A piece of the transcription is illustrated below.

time      area                       content
0:17:02   Yildun's private chat      Y58 “en 4 alors”, followed by its publication (Y58b)
          Rastaban's private chat    R59a “non car il fo deplie avan”
0:17:18   published chat             publication (R59b): R59 “non car il fo deplie avan”
          text editor                E2-R60a “puis m”
Based on the transcription, we made a fine-grained analysis of interactions, which aimed at
considering and testing the two specific questions set out above. So, for the analyses, we had no
“hard” hypotheses, but rather expectations of what could happen and be observed. As indicators, we
attached importance to specific operations, their content and the areas where they “happened”. All of this
was examined from an interactional perspective, that is, considering an operation and what
follows from the participants, and not only the isolated operation.

… And theoretical groundings


In this collaborative mediated situation, we show users digital traces of their interactions
with the computer. These traces are inscriptions of their experience, in a phenomenological sense.
This experience becomes tangible to users through a visible history at the interface. These traces are
resources for their activity, and sources with which to configure it. We argue that presenting users with their
interactional history with the technical device is a way to support development.
The learners we observe create and design things. They are responsible for some of the world’s
transformations. Indeed, we adopt an enactive approach to experience (Varela, 1993). We argue
that learners enact the content of possible experiences; they are its instigators (Noë, 2006). More
fundamentally, showing users records of their interactions with the computer, traces of their
experience, is a way to enact experience, and a phenomenological approach to studying the
situation.

Results and short discussion


This experiment and its analysis have led us to highlight interesting events in the use of
traces, with ethnomethodological tools. We have shown that traces can be part of a meta-activity and
can participate in an instrumental genesis and appropriation of the system. We have also proposed
a typology of interaction traces and their uses, according to different properties linked to the user and
to the system area originating the traces (ownership, addressing, lability, consulting or operating use). For
more details, see Ollagnier-Beldame (2006).
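
To fix ideas, the properties used in this typology could be captured in a simple record such as the one sketched below (field names are our own assumptions; see Ollagnier-Beldame, 2006, for the actual typology):

from dataclasses import dataclass

@dataclass
class InteractionTrace:
    # One visible trace of interaction, described by the properties listed above.
    timestamp: str      # e.g. "0:17:18"
    area: str           # interface area originating the trace (private chat, published chat, editor)
    owner: str          # which co-writer produced it (ownership)
    addressing: str     # private or published (addressing)
    labile: bool        # whether the trace persists at the interface (lability)
    use: str            # "consulting" or "operating" use
    content: str        # the inscribed content itself

trace = InteractionTrace("0:17:18", "published chat", "Rastaban",
                         "published", False, "consulting",
                         "non car il fo deplie avan")
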
More widely, the analyses of our testing show the opportunistic and situated nature of the activity of
co-writing. The ethnomethodological analysis indicates moments that make sense for the inter-actors, and
points where signification stabilises. It is then very interesting to trace the origins of these
stabilisations, and to show how signification is negotiated between humans and non-humans
(Latour, 1989).

References
Hutchins, E. (1995). Cognition in the Wild. Cambridge (MA): MIT Press.
Hutchins, E. (2005). Material anchors for conceptual blends. Journal of Pragmatics, 37(10), 1555-1577.
Latour, B. (1989). La Science en action. Paris, La Découverte.
Maturana, H. R., & Varela, F. J. (1987). The tree of knowledge: The biological roots of human understanding. New
Science Library: Boston.
Noë, A. (2006). Art as enaction. In Interdiscipline Online Symposium on Art and Cognition. Translated from English by
Anne-Marie Varigault.
Ollagnier-Beldame, M. (2006). Traces d’interactions et processus cognitifs en activité conjointe : le cas d’une co-
rédaction médiée par un artefact numérique. Thèse de doctorat en sciences cognitives, Université Lyon 2 (à
paraître).
Suchman, L. (1987). Plans and Situated Actions: The problem of human-machine communication. Cambridge:
Cambridge University Press.
Varela, F., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience.
Cambridge, MA: MIT Press. French translation by V. Havelange: L'inscription corporelle de l'esprit. Sciences
cognitives et expérience humaine. Paris: Éditions du Seuil.
Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge & London:
Harvard University Press.
3rd International Conference on Enactive Interfaces (Enactive /06) 195
Enactive perception and adaptive action in the embodied self

Adele Pacini, Phil Barnard & Tim Dalgleish


Medical Research Council, Cognition & Brain Sciences Unit, UK
Adele.Pacini@mrc-cbu.cam.ac.uk

The work of Gibson (1979) provides a valuable perspective on enactive perception. Central to
this work is the notion of object affordances. The affordances of an object are defined as “what it
offers the animal, what it provides or furnishes, either for good or ill” (p. 127). In essence, it is
suggested that the perception of an object provides direct information to the observer regarding the
potential actions that can be carried out with it. At a neuroscientific level, the implications are that
the perception of an object directly activates motoric information related to interacting with it. This
provides the necessary mechanism for enactive perception. Furthermore, the neuroscientific finding
of canonical neurons by Rizzolatti and his team (1988) suggests that there is a neural level of
support for this mechanism. However, whilst these canonical neurons can reasonably be judged as
necessary for enactive perception, they are unlikely to be sufficient.

In the present studies, we investigate the scope of affordances to provide a bridge between
perception and action. In particular, we assess whether the perception of an object necessarily
results in the cognitive utilization of action schemas. In this sense, we further examine how far the
neural premise of object centred affordances can be abstracted to higher level cognitive functions.
The task presents words rather than physical objects, and actions are in relation to a virtual rather
than a physical self. In a paradigm developed by Brendl, Markman & Messner (2005), we present
the participant’s name in the centre of a computer screen; this represents “the self”. A positive or
negative word is presented either to the left or to the right of that name. In the adaptive condition, participants
are asked to move the word towards their name if they like the word, and to move it away if they
dislike it. In the maladaptive condition they are asked to move the word towards their name if they
dislike it and away from their name if they like it. Figure 1 presents the main finding, namely that
participants are faster to respond to the stimuli in the adaptive than in the maladaptive condition.
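
A minimal sketch of the trial logic and of the RT comparison (hypothetical data and names, for illustration only):

from statistics import mean

def is_adaptive(valence, movement):
    # Adaptive mapping: liked (positive) words moved towards the name,
    # disliked (negative) words moved away from it.
    return (valence, movement) in {("positive", "towards"), ("negative", "away")}

# Hypothetical trials: (word valence, movement performed, reaction time in ms).
trials = [("positive", "towards", 780), ("negative", "away", 810),
          ("positive", "away", 990), ("negative", "towards", 1020)]

adaptive_rt = mean(rt for v, m, rt in trials if is_adaptive(v, m))
maladaptive_rt = mean(rt for v, m, rt in trials if not is_adaptive(v, m))
print(adaptive_rt, maladaptive_rt)   # maladaptive responses slower by roughly 200 ms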

[Figure 1, not reproduced here: mean RT (msec) for positive and negative words (word valence), for the “positive towards / negative away” (adaptive) and “positive away / negative towards” (maladaptive) mappings.]

Figure 1. The time taken to move valenced stimuli for both adaptive and maladaptive actions.

This finding suggests that the act of evaluating a stimulus is an enactive process, whereby
adaptive action information is utilised in cognitive processing. Where the task demands a
maladaptive action, this results in a delay of around 200msec. However, there are a couple of issues
with this study. Firstly, it is unclear whether adaptive action schemas are present, or whether this
task represents a top-down effect whereby “like-towards” is easier to process than “like-away”.
Theoretically, these action schemas should become active without any higher-level cognitive
evaluation; we assess this possibility in Study 2. This design has the additional advantage of
removing the top-down confound of Study 1.
Enactive perception without cognitive processing
In the second study we present the same paradigm, but participants are asked to move the
stimuli based on perceptual factors. Specifically, they are asked to move the word towards their
name if it is in uppercase letters, and away from their name if it is in lowercase letters. This
instruction set was reversed for a second group of participants. Figure 2 presents the main finding
from this study, namely that the mere act of perceiving a stimulus, without any cognitive processing,
activates action schemas when the action to be performed has threat-related consequences for the
self.

[Figure 2, not reproduced here: mean RT (msec) for positive and negative stimuli in the four instruction conditions: UCT(Positive)-LCA(Negative) (congruent), UCT(Negative)-LCA(Positive) (incongruent), UCA(Negative)-LCT(Positive) (congruent), and UCA(Positive)-LCT(Negative) (incongruent).]

Figure 2. The time taken to move valenced stimuli, based on perceptual factors, for both
adaptive and maladaptive actions.

This asymmetry between positive and negative stimuli suggests that the proposed neural level
of enactive perception may not always be utilised in higher-level cognitive processing. Furthermore,
the utilisation of this information is not simply related to the depth of processing performed on the
stimuli (as proposed by Solomon & Barsalou, 2004). Rather, it operates on a more ecological basis, whereby
stimuli that have threat-related consequences for the self are given priority.

Conclusions
To an extent, these findings do support an enactive theory of knowledge representation.
However, questions remain as to how far neural frameworks of enactive knowledge hold
explanatory power for higher-level cognitive processing. Furthermore, the difference in effect size
between Studies 1 and 2 suggests that there was an element of top-down conflict between like-away
and dislike-towards. This points to a need for cognitive psychology paradigms to ensure that the
relationship between body and mind has not been established by task design. However, the real
challenge for enactive or embodied approaches to cognition is not simply establishing that modal
information is activated in cognitive processing, but elucidating the mechanisms by which lower-
level sources of embodiment may or may not be integrated into higher-level cognitive processing.

References
Brendl, C. M., Markman, A. B., & Messner, C. (2005). Indirectly measuring evaluations of several attitude objects in
relation to a neutral reference point. Journal of Experimental Social Psychology, 41, 346-368.
Gibson, J. J. (1979). The ecological approach to visual perception. London: Erlbaum.
Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., & Matelli, M. (1988). Functional organisation of
inferior area 6 in the macaque monkey: II. Area F5 and the control of distal movements. Experimental Brain
Research, 71, 491-507.
Solomon, K. O., & Barsalou, L. W. (2004). Perceptual simulation in property verification. Memory and Cognition,
32(2), 244-259.
3rd International Conference on Enactive Interfaces (Enactive /06) 197
Presence and Interaction in mixed realities

George Papagiannakis, Arjan Egges & Nadia Magnenat-Thalmann


MIRALab, University of Geneva, Switzerland
Papagiannakis@miralab.unige.ch

In this paper, we present a simple and robust Mixed Reality (MR) framework that allows for
real-time interaction with Virtual Humans in real and virtual environments under consistent
illumination. We will look at three crucial parts of this interface system: interaction, animation and
global illumination of virtual humans for an integrated and enhanced presence and believability.
The interaction system comprises a dialogue module, which is interfaced with a speech recognition and synthesis system. Alongside speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal keyframe animations, or motions that are generated on the fly by adapting previously recorded clips. All these different motions are generated and blended on-line, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a Precomputed Radiance Transfer (PRT) illumination model (Sloan et al., 2002) extended for virtual humans, resulting in a realistic display of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, integrated under a single framework (Papagiannakis et al., 2005) for presence and interaction in MR.

Our methodology
Over the last few years, many different systems have been developed to simulate scenarios
with interactive virtual humans in virtual environments in real-time. The control of these virtual
humans in such a system is a widely researched area, where many different types of problems are
addressed, related to animation, speech, deformation, and interaction, to name a few research topics.
The scope of applications for such systems is vast, ranging from virtual training or cultural heritage
to virtual rehabilitation. Although there are a variety of systems available with many different
features, we are still a long way from a completely integrated system that is adaptable for many
types of applications. This is not only because of the amount of effort required to integrate different pieces of software, but also because of the real-time constraint. The latter becomes an issue especially when many different components need to work together. The true challenge of such systems is for a person to be able to interact naturally with the virtual humans in the virtual as well as in the real scene, for example by using a speech recognition interface. A great deal of research has been done on chatbots, mostly focused on simple stand-alone applications with a cartoon face (Alice, 2006); however, only a few systems succeed in linking interaction with the control of 3D face and/or body motions played in synchrony with text-to-speech (Cassell et al., 2001; Hartmann et al., 2002; Kopp et al., 2004). The main problem with such systems is that they are far
from ready to be used in Mixed Reality applications, which are much more demanding than a stand-
alone application. For example, a highly flexible animation engine is required that not only plays
animations in combination with interaction, but that is also capable of playing simple key-frame
animations as part of a predefined scenario. This also means that a dynamic mechanism is required
that allows switching between key-frame animations played as part of a scenario and animations
related to the interaction (such as body gestures and facial expressions) without interrupting the
animation cycle. Furthermore, Mixed Realities require a more elaborate rendering method for the
virtual humans in the scene that uses the global illumination information, so that virtual humans that
augment a reality have lighting conditions that are consistent with the real environment. This is
another challenge that this work takes on, since consistent rendering in MR contributes both to the feeling of presence and to the realism of interaction with virtual characters in MR scenes. To the best of the authors' knowledge, no such systems have been reported in the literature to date (Papagiannakis et al., 2005). In this paper, we propose simple and fast methods for three
important components (Interaction, Animation and Rendering) that alleviate some of the issues discussed above. Our approaches are specifically tailored to work as components of a real-time mixed reality application. Our fully integrated system includes speech recognition, speech synthesis, interaction, emotion and personality simulation (Egges et al., 2004), real-time face and body animation and synthesis, real-time camera tracking for AR, and real-time virtual human PRT rendering, and is able to run at acceptable real-time speeds (20-30 fps) on a normal PC. As a part
of the work presented in this paper, we will show the system running different scenarios and
interactions in MR applications. The following figure illustrates examples of our proposed work.

Figure 1. A mixed reality presence and interaction framework for enhanced believability
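As a rough illustration of the dynamic switching mechanism described above, between scripted key-frame clips and interaction-driven motions, the following sketch shows one possible controller structure; all class and method names are hypothetical and do not correspond to the MIRALab engine's API:

```python
# Sketch of a controller that blends interaction-driven motions on top of a
# scenario key-frame clip without interrupting the animation cycle.
# All names are hypothetical; this is not the MIRALab engine's API.

class Clip:
    def __init__(self, name, duration):
        self.name, self.duration = name, duration

class AnimationController:
    def __init__(self, blend_time=0.3):
        self.scenario_clip = None        # key-frame clip from the predefined scenario
        self.interaction_clip = None     # gesture/expression triggered by the dialogue system
        self.interaction_elapsed = 0.0
        self.blend_time = blend_time
        self.blend = 0.0                 # 0 = scenario only, 1 = interaction only

    def play_scenario(self, clip):
        self.scenario_clip = clip

    def trigger_interaction(self, clip):
        # An interaction motion (e.g. a nod produced by the dialogue module)
        # starts blending in on top of the running scenario clip.
        self.interaction_clip = clip
        self.interaction_elapsed = 0.0

    def update(self, dt):
        # Blend towards the interaction clip while one is playing, then back to
        # the scenario clip, so the animation cycle is never interrupted.
        if self.interaction_clip is not None:
            self.interaction_elapsed += dt
            if self.interaction_elapsed >= self.interaction_clip.duration:
                self.interaction_clip = None     # gesture finished, fade it out
        target = 1.0 if self.interaction_clip else 0.0
        step = dt / self.blend_time
        self.blend = min(1.0, self.blend + step) if target > self.blend else max(0.0, self.blend - step)
        return {"scenario_weight": 1.0 - self.blend, "interaction_weight": self.blend}

if __name__ == "__main__":
    ctrl = AnimationController()
    ctrl.play_scenario(Clip("guided_tour", 30.0))
    ctrl.trigger_interaction(Clip("greeting_gesture", 2.0))
    for _ in range(3):
        print(ctrl.update(0.1))
```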

References
Alice chat bot (2006). http://www.alicebot.org/.
Cassell, J., Vilhjálmsson, H., & Bickmore, T. (2001). BEAT: the Behavior Expression Animation Toolkit. Proceedings
of Special Interest Group on Graphics and Interactive Techniques (SIGGRAPH’01), pp 477–486.
Egges, A., Molet, T., & Magnenat-Thalmann, N. (2004). Personalised real-time idle motion synthesis. Pacific Graphics
2004, pp 121–130.
Hartmann, B., Mancini, M., & Pelachaud, C. (2002). Formational parameters and adaptive prototype instantiation for
mpeg-4 compliant gesture synthesis. Computer Animation 2002, pp 111–119.
Kopp, S., & Wachsmuth, I. (2004). Synthesizing multimodal utterances for conversational agents. Computer Animation
and Virtual Worlds, 15(1), 39-52.
Papagiannakis, G., Schertenleib, S., O’Kennedy, B., Poizat, M., Magnenat-Thalmann, N., Stoddart, A., & Thalmann, D.
(2005). Mixing Virtual and Real scenes in the site of ancient Pompeii. Journal of Computer Animation and
Virtual Worlds, 16(1), 11-24. Wiley Publishers.
Sloan, P. P., Kautz, J., & Snyder, J. (2002). Precomputed radiance transfer for real-time rendering in dynamic, low-
frequency lighting environments. Proceedings of ACM Special Interest Group on Graphics and Interactive
Techniques (ACM SIGGRAPH’02), pp 527–536. ACM Press, 2002.
Enactic applied to sea state simulation

Marc Parenthoën & Jacques Tisseau


Laboratoire d'Informatique des Systèmes Complexes (LISyC, EA 3883), CERV, France
parenthoen@enib.fr

Enactic main notions


For modelling complex systems, we propose to use the enaction concept as a principle for building autonomized models of phenomena, and to use the computer as a technological support for experimenting with these models in a virtual reality system.
Phenomena are chosen by those who will use the virtual reality system, according to their praxis in the real world. The modelled phenomena are tried out by the modeller in enaction through the virtual reality system.
Since we have no global model for complex systems, and in order to preserve the autonomy and interaction aspects of enaction, phenomena are modelled as autonomous entities, and interactions between models go through a medium created, and made to evolve, solely by the models' activities. We qualify such models as enactic, since a model is only poorly enactive in itself. More precisely (Figure 1), an enactic model is composed of a triplet (prediction, action, reorganisation) of active objects (parameters, functions and activities) and an inner clock scheduling activities by chaotic asynchronous iterations (Harrouet et al., 2002).

1. The prediction active object structures the medium by putting down beacons, localized in time and space, which ask for certain properties according to the perceptive needs of the phenomenon model. We call aisthesis these functions of active perception which create interaction beacons.
2. The action active object, on the one hand, acts on the medium thus created by all the aisthesis to give it experimented properties. We call praxis these functions that make the phenomenon model perceptible by others. On the other hand, action gives autonomy to the model by executing its inner know-how.
3. The reorganisation active object informs the phenomenon about experimental results carried out in the area explored by its aisthesis, or creates a new instance of another enactic model. We call poiesis these functions of effective perception and creation.
Figure 1. The enactic model

The action active object may have complex inner know-how, which can itself contain inner simulations of enactic models. In that case, we talk about a second-order enactic model. The formal enactic model is fully described in Parenthoën (2004).
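To make the triplet structure more concrete, here is a minimal sketch of an enactic entity whose prediction (aisthesis), action (praxis) and reorganisation (poiesis) active objects share a beacon-based medium; the names, values and scheduling are simplified assumptions and do not reproduce the ARéVi/oRis implementation:

```python
# Minimal sketch of an enactic entity: prediction places beacons in the medium
# (aisthesis), action writes experimented properties onto beacons placed by
# others (praxis) and executes inner know-how, reorganisation reads back the
# results (poiesis). Simplified assumption, not the ARéVi/oRis implementation.
import random

class Beacon:
    def __init__(self, position, time, requested):
        self.position, self.time = position, time
        self.requested = requested       # properties the perceiving entity asks for
        self.properties = {}             # values written by other entities' praxis

class EnacticEntity:
    def __init__(self, name, medium):
        self.name, self.medium = name, medium

    def prediction(self, t):
        """Aisthesis: structure the medium with beacons where perception is needed."""
        b = Beacon(position=(random.random(), random.random()), time=t,
                   requested=["elevation"])
        self.medium.append(b)

    def action(self, t):
        """Praxis: give experimented properties to beacons asked for by others
        (the entity's own inner know-how is omitted here)."""
        for b in self.medium:
            if "elevation" in b.requested and "elevation" not in b.properties:
                b.properties["elevation"] = 0.0   # placeholder value

    def reorganisation(self, t):
        """Poiesis: read back experimental results from the explored area."""
        return [b.properties for b in self.medium if b.time <= t and b.properties]

# Chaotic asynchronous iterations approximated by a randomised activation order.
medium = []
entities = [EnacticEntity("A", medium), EnacticEntity("B", medium)]
for t in range(3):
    for e in random.sample(entities, len(entities)):
        e.prediction(t); e.action(t); e.reorganisation(t)
```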
Developing a language oriented towards enactic entities will ensure effective ergonomics for the implementation of these models and will make it possible to perpetuate the know-how acquired in modelling complex systems and in experimenting with these models through virtual reality. Over the last 10 years, more than a hundred thousand lines of C++ code have been written in our laboratory, making the implementation of enactic models in a virtual reality system easier thanks to the ARéVi library (Harrouet et al., 2006). The ergonomics of this language still has to be improved for non-specialists in computer science.

An enactic model for sea states


First, sailors and oceanographers agree on the importance of wave groups, breakings, winds, currents and shallow-water phenomena, and of their mutual interactions, for the study of sea states. Instead of numerically solving the Navier-Stokes equations (complexity limitations) and instead of simulating spectral distributions of quasi-linear waves (the cited phenomena are then ignored), we use oceanographic results from both theory and experiments to model sea state phenomena as enactic entities (Cani et al., 2006). The resulting model is named IPAS (Interactive Phenomenological Animation of the Sea) and includes enactive entities such as wave groups, active and passive breakings, local winds, shallow waters and currents, which may constitute the affordances of sailors and oceanographers when reading sea states.
Figure 2 shows the surface of the sea where wave groups (swell + wind sea generated by distant and local wind), breakings (generated by groups) and winds (the synoptic wind is given by the operator; venturi and wavy winds are created by the synoptic wind) interact at multiple scales (from the metre to the kilometre). Detailed descriptions and physical assumptions for each entity behaviour are given for the physical oceanography community in Parenthoën et al. (2004).

Enactic models in IPAS are divided into oceanographic models for wave groups, breakings and winds, and descriptive models for currents and bathymetry. All of them give properties to beacons when applying their praxis. The beacons to be experimented are generated where the aisthesis of the entities provide a medium for interactions, through the resulting poiesis and inner know-how. Some of the whole set of interactions are modelled in IPAS: action towards wave groups from other wave groups, breakings, winds, bathymetry and currents, and action towards breakings from wave groups, winds and currents, and indirectly from bathymetry through the evolution of wave groups. Interactions are computed in terms of action or energy transfers, wave parameters, breaking activity, transport, refraction and creation. These enactive entities ensure the physical believability of the virtual environment: action balance, wind stress, refraction and transport.

Figure 2. Screen shot from a virtual sea state simulated by the enactic model IPAS

To conclude, enactic is a new constructive method for modelling complex systems involving multi-model and multi-scale interactions. Enactic might contain the premises of a new methodology for the study and comprehension of complex systems. Enactic was applied to give rise to the sea state model IPAS, within the framework of simulations useful for sailors and oceanographers. While the physical validity of models generated according to this method remains to be proven, one should also study how this sort of participative simulation eases the transfer from enaction to virtual experiments of artificial enaction.

References
Cani, M., Neyret, F., Parenthoën, M., & Tisseau, J. (2006). Modèles pour les environnements naturels. In Fuchs, P., Moreau, G., & Tisseau, J. (Eds.), Le traité de la Réalité Virtuelle, 3rd edition, 3, 315-332. Les Presses de l'Ecole des Mines, Paris.
Harrouet, F., Cazeaux, E., & Jourdan, T. (2006). ARéVi. In Fuchs, P., Moreau, G., & Tisseau, J. (Eds.), Le traité de la Réalité Virtuelle, 3rd edition, 3, 369-392. Les Presses de l'Ecole des Mines, Paris.
Harrouet, F., Tisseau, J., Reignier, P., & Chevailler, P. (2002). oRis : un environnement de simulation interactive multi-agents. Technique et Science Informatiques (RSTI-TSI), 21(4), 499-524.
Parenthoën, M. (2004). Animation phénoménologique de la mer -- une approche énactive. PhD thesis, Laboratoire d'Informatique des Systèmes Complexes (LISyC, EA3883), Université de Brest, France.
Parenthoën, M., Jourdan, T., & Tisseau, J. (2004). IPAS: Interactive Phenomenological Animation of the Sea. In Chung, J. S., Prevosto, M., & Choi, H. S. (Eds.), International Offshore and Polar Engineering Conference (ISOPE), volume 3, pages 125-132, Toulon, France.
The role of expectations in the believability of mediated interactions

Elena Pasquinelli
Institut Nicod - EHESS, France
Elena.Pasquinelli@ehess.fr

The notion of believability in mediated conditions can be characterized as a judgment regarding the plausibility of a certain mediated experience, the judgment being positive when the experience respects the expectations of the subject that are activated by the contents and context of the experience itself. When we consider a certain experience as believable, in fact, we do not necessarily consider the experience as being true, in the sense of being an experience with real, existing objects. Nor do we consider that experience as susceptible of becoming true, for instance in the future.
Since no problem of existence is at stake, the adherence of the experience to the experienced reality cannot be a criterion for believability: when we consider a certain experience as believable we simply accept it as plausible under certain conditions. Since the subject cannot compare his experience with reality, he might instead compare his experience with his expectations. Expectations are in fact always present when we have an experience at the cognitive, perceptual or motor level. That we normally hold a certain number of expectations is attested by the fact that we react with surprise when faced with certain unexpected events. Surprise is in fact an effect of unfulfilled expectations (Casati & Pasquinelli, Submitted; Davidson, 2004; Dennett, 2001).

Different types of expectations


By virtue of the role played by expectations in believability, it seems important for VR designers to identify the expectations held by the users. In considering the conditions that are relevant for the judgment of believability, we must then take into account the existence of different kinds of knowledge and the related expectations.
In certain conditions, for instance, the VR experience involves scientific knowledge, as can be the case for training and simulation in medical applications. Another type of knowledge which certainly seems to be involved in most applications is so-called commonsense knowledge. Two types of commonsense knowledge can be described. Commonsense knowledge of the kind found in naïve, qualitative or folk physics refers to the world as most people think about it, rather than to the world as physicists think about it. This form of commonsense knowledge is expressed by beliefs (possibly by theories) and generates explicit expectations, as is the case for scientific knowledge; contrary to scientific knowledge, it is not necessarily correct or justified. The second type of commonsense knowledge is a very general form of knowledge generating a wide set of expectations. These expectations do not refer to some form of belief or theory but are based on the existence of connections between perceptual experiences, or between motor actions and perceptual experiences. Selection and learning from experience are at the origin of this kind of knowledge and the related expectations (see Stein & Meredith, 1993). As expressed by the label “commonsense”, the previous types of knowledge are largely shared by human beings. But any human being can also acquire new knowledge through local experiences, study, and practice. Local knowledge depends on a more local context than commonsense knowledge and is not necessarily widely shared. Additionally, three types of knowledge can be distinguished according to the type of learning through which they are acquired: symbolic, iconic and enactive knowledge. The time needed for the acquisition of new knowledge varies with the type of activity, but it is plausible that the acquisition of new perceptual and motor connections, and the possible modification of early acquired and shared connections, will require more time to take place, if it can take place at all. It seems possible, in any case, that suitable training has an effect in producing new connections such as those described for the second form of commonsense knowledge, thanks to the existence of neural plasticity (see Bach-y-Rita, 1982; Benedetti, 1991). Finally, the interaction with the virtual world can be the occasion for the user to acquire new symbolic, iconic and enactive knowledge, and, through suitable training, to develop new perceptual and motor connections, hence to give rise to a large set of new expectations. It is important to understand which instruments are most suitable for producing new learning at different levels when this is desirable, and how to exploit these acquisitions, for instance in order to compensate for the limitations of VR systems.

Activated and deactivated expectations in VR


The case of the experience with fictional, virtual and artificial worlds is a special one. As a matter of fact, only certain expectations are at stake in these kinds of mediated experiences. Other expectations are necessarily deactivated. The judgment of believability, then, does not depend on expectations in general but on some specific expectations that are activated by the context and the contents of the experience. In the context of interaction with virtual worlds mediated by enactive interfaces, expectations can be activated and deactivated at three levels: narrative, perceptual, and motor-perceptual or interactive. Implicit knowledge and the related implicit expectations, based on the possibilities of action, on the rules of perception and on motor-perceptual connections, are then particularly relevant for the experience with enactive interfaces.
The deactivation of expectations is connected to the notion of suspension of disbelief. Two different questions arise that have a theoretical interest for the understanding of cognition and a pragmatic interest for VR designers: Which factors influence the suspension of disbelief, that is, the deactivation of certain expectations? Which expectations can be suspended? It is plausible that certain expectations will be more resistant to deactivation than others. It is plausible that commonsense knowledge will produce expectations that are more stable than those produced by local acquisitions, and that perceptual and motor connections will give rise to expectations that are more stable than expectations based on certain beliefs.
The other aspect of the triggering of believability consists in respecting the expectations that are activated. This aspect involves two conditions: the possibility of activating certain expectations, and the coherence between the experience proposed and the expectations that are
activated. As for the suspension of disbelief, the first question entails the theoretical and pragmatic
problem of which expectations can be activated and how. The context and content of the experience
seem to play an important role both in the activation and de-activation of expectations. A study on
volatile expectations (that is, on expectations that are activated in special conditions but do not need
to be present all the time in the mind of the subject) in non-mediated experiences suggests that the
subject’s intention to act might be another activating factor for expectations because the intention to
act retrieves expectations that are relevant for the task (Casati & Pasquinelli, Forthcoming).

References
Benedetti, F. (1991). Perceptual learning following a long-lasting tactile reversal. Journal of Experimental Psychology:
Human Perception & Performance, 17(1), 267-277.
Casati, R., & Pasquinelli, E. (Forthcoming). How can you be surprised? The case for volatile expectations. In A. Noe
(Ed.), Hetherophenomenology and Phenomenology, special issue of Phenomenology and the Cognitive Sciences.
Davidson, D. (2004). Problems of rationality. Oxford: Clarendon Press.
Dennett, D. C. (2001). “Surprise, surprise," commentary on O'Regan and Noe. Behavioural and Brain Sciences, 24(5),
982.
Stein, B. E., & Meredith, M. E. (1993). The merging of the senses. Cambridge, Mass.: MIT Press.
Exploring full-body interaction with environment awareness

Manuel Peinado1, Ronan Boulic2, Damien Maupu2, Daniel Meziat1 & Daniel Thalmann2
1 Departamento de Automática, University of Alcalá, Spain
2 VRLab, Ecole Polytechnique Fédérale de Lausanne, Switzerland
manupg@aut.uah.es

Interactive control of an avatar through full body movement has a wide range of applications.
We present a system that allows such control while avoiding collisions between the virtual character
and its environment. Motion is transferred to the avatar thanks to a set of kinematic constraints
satisfied in real-time by a Prioritized Inverse Kinematics solver. At the same time, a set of special
entities called “observers” check the motion of relevant body parts, damping their progression
towards near obstacles in order to avoid future collisions. We have tested this system within the
context of a virtual reality application in which the user performs manipulation tasks in virtual
environments populated by static and dynamic obstacles.

Method overview
We present a method for the interactive control of an avatar in a virtual environment
populated with obstacles. This method is based on a state-of-the-art Prioritized IK solver, which is
able to satisfy multiple constraints according to a strict priority ordering (Baerlocher & Boulic,
2004). A motion reconstruction module has the responsibility of tracking the position of the input
sensors. Since these sensors change on each frame, the IK solver must be run on each frame to find
an updated pose that matches the new position of the sensors. This module is unaware of the
environment and may thus produce poses that penetrate surrounding obstacles. The collision
avoidance module takes care of this by keeping a list of observers that monitor the output of the IK
solver, activating a set of preventive constraints when necessary.

Inverse Kinematics for motion reconstruction


On each new motion frame, the input sensors provide information on the position of the
performer’s wrists, ankles and head. We use the Prioritized IK (PIK) solver to compute the
configuration of the joints that are not directly specified by the input sensors. In order to make good use of the prioritization capabilities of the PIK solver, it is important to identify which constraints have the biggest impact on the reconstructed postures, and to assign them higher priorities. Our experience has led us to assign the highest priority to the center-of-mass position constraint: this constraint maintains balance by ensuring that the center of mass projects over the support polygon. The next highest-priority constraint is the tracking of the ankle sensors. Attaining good tracking of the ankles is important for the believability of reconstructed motions, especially during frames with ground contact.
At the end of the priority ranking are the wrist and head tracking constraints, because errors in their
satisfaction are less noticeable. The effectiveness of this constraint hierarchy has been demonstrated
by the reconstruction of full-body motions such as crouching or jumping (Peinado et al., 2006).
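As an illustration of this strict priority ordering, the following sketch shows how the constraint hierarchy could be declared and traversed level by level; the structure and names are hypothetical and do not reproduce the actual PIK solver of Baerlocher and Boulic (2004):

```python
# Sketch of how the strict priority ordering described above could be declared,
# from most to least important. Hypothetical structure; the actual PIK solver
# (Baerlocher & Boulic, 2004) is not reproduced here.

PRIORITIZED_CONSTRAINTS = [
    # (priority level, constraint, purpose) -- lower level = higher priority
    (0, "center_of_mass", "keep the centre of mass projected over the support polygon"),
    (1, "left_ankle_tracking",  "follow the left ankle sensor"),
    (1, "right_ankle_tracking", "follow the right ankle sensor"),
    (2, "left_wrist_tracking",  "follow the left wrist sensor"),
    (2, "right_wrist_tracking", "follow the right wrist sensor"),
    (2, "head_tracking",        "follow the head sensor"),
]

def constraints_by_priority(constraints):
    """Group constraints level by level; conceptually, each level is only satisfied
    in the null space left by the levels above it."""
    levels = {}
    for level, name, purpose in constraints:
        levels.setdefault(level, []).append(name)
    return [levels[k] for k in sorted(levels)]

if __name__ == "__main__":
    for i, level in enumerate(constraints_by_priority(PRIORITIZED_CONSTRAINTS)):
        print(f"priority level {i}: {level}")
```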

Collision avoidance with observers


The collision avoidance layer relies on the key notion of observer. An observer is an entity
that is attached to each body part that must be checked for collisions. The interaction of an observer
with its environment depends on its shape, which is user-specified. On each iteration of the IK loop,
an observer checks the position and velocity of its associated body part with respect to surrounding
obstacles. When it detects that its body part is moving towards a near obstacle (or vice versa), it
activates a preventive constraint that damps the motion of the body part in the obstacle’s normal
direction. The use of relative velocities for this damping has interesting implications. Think of a
scenario where an obstacle moves towards a still observer. Thanks to the relative velocity
formulation, this is treated as if the observer is moving, so a constraint is activated and the observer
moves to avoid the obstacle (Figure 1b).
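A strongly simplified one-dimensional sketch of the observer logic described above, deciding from the relative velocity whether a preventive damping constraint should be activated (thresholds and names are invented; this is not the authors' implementation):

```python
# Simplified 1D sketch of an "observer": on each IK iteration it checks the
# relative velocity between its body part and a near obstacle and decides
# whether a preventive damping constraint must be activated.
# Hypothetical thresholds; not the authors' implementation.

class Observer:
    def __init__(self, activation_distance=0.3):
        self.activation_distance = activation_distance  # metres

    def preventive_constraint_needed(self, part_pos, part_vel, obstacle_pos, obstacle_vel):
        """True when body part and obstacle approach each other and are close,
        i.e. when motion along the obstacle's normal should be damped."""
        to_obstacle = obstacle_pos - part_pos
        relative_vel = part_vel - obstacle_vel
        approaching = relative_vel * to_obstacle > 0     # closing in along that direction
        return approaching and abs(to_obstacle) < self.activation_distance

if __name__ == "__main__":
    obs = Observer()
    # Obstacle moving towards a still body part: thanks to the relative velocity
    # formulation this is treated as if the part were moving, so a constraint fires.
    print(obs.preventive_constraint_needed(0.0, 0.0, 0.2, -0.5))   # True
    # Body part far from any obstacle: no constraint.
    print(obs.preventive_constraint_needed(0.0, 0.4, 2.0, 0.0))    # False
```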
Experimental setup
We have used a magnetic motion capture device to test our system. The user was equipped
with 5 sensors located at the head, wrists and ankles. His motion was transferred in real-time to the
avatar, and corrected by the collision avoidance module to avoid near obstacles. This is shown in
Figure 1.a. The user was asked to kneel down and wipe the floor below the obstacle. It can be seen
how the avatar motion preserves the features of the user actions (e.g., move the arm from left to
right), while avoiding collisions with the obstacle.

First evaluation and perspectives


One weakness of this interaction paradigm is that the user has to keep eye contact with the
screen in order to see where obstacles in the virtual environment are. We plan to improve this by
trying alternative ways of interaction, such as a head mounted display. Another possibility would be
to provide the user with a hand-held controller that enables him to change the point of view in the
large screen, or even to switch to first-person view at will. Regarding the collision avoidance
capability, our algorithm generates a kind of “proactive” repulsion that is coherent with the relative
observer-obstacle velocity. The usefulness of this can be seen in the experiment of Figure 1.b. The
user was told to stay still with the arms stretched forward. The avatar bends his left arm before any
collision occurs thanks to the smooth collision zone surrounding the obstacle (in wireframe). On the
other hand, the right arm collides as it has no observers attached to it. We find such a preventive
behaviour to be more realistic than a pure reaction to effective collisions. It will serve as the technological background of future experiments focusing on the usability of such an interface.


Figure 1. (a) Manipulation under a low obstacle. (b) Dodging a moving obstacle.

References
Baerlocher, P., & Boulic, R. (2004). An Inverse Kinematics Architecture Enforcing an Arbitrary Number of Strict
Priority Levels. The Visual Computer, 20(6), 402-417.
Peinado, M., Boulic, R., Meziat, D., & Raunhardt, D. (2006). Environment-Aware Postural Control of Virtual Humans
for Real-Time Applications. Proceedings of the SAE Conference on Digital Human Modeling for Design and
Engineering, July 2006, ENS à Gerland, Lyon, France.
Mimicking from perception and interpretation

Catherine Pelachaud1, Elizabetta Bevacqua1, George Caridakis2, Kostas Karpouzis2, Maurizio Mancini1, Christopher Peters1 & Amaryllis Raouzaiou2
1 University of Paris 8
2 National Technical University of Athens
c.pelachaud@iut.univ-paris8.fr

The ability of an agent to provide feedback to a user is an important means of signalling to the world that it is animated, engaged and interested. Feedback influences the plausibility of an agent's behaviour with respect to a human viewer and enhances the communicative experience. During conversation, addressees show their interest, understanding, agreement and attitudes through feedback signals. They also indicate their level of engagement. It is often said that speaker and addressee dance with each other when engaged in a conversation. This dancing together is partly due to the mimicking of the speaker's behaviour by the addressee. In this paper we are interested in addressing this issue: mimicking as a signal of engagement.
We have developed a scenario whereby an agent senses, interprets and copies a range of facial and gestural expressions from a person in the real world. Input is obtained via a video camera and
processed initially using computer vision techniques. It is then processed further in a framework for
agent perception, planning and behaviour generation in order to perceive, interpret and copy a
number of gestures and facial expressions corresponding to those made by the human. By perceive,
we mean that the copied behaviour may not be an exact duplicate of the behaviour made by the
human and sensed by the agent, but may rather be based on some level of interpretation of the
behaviour (Martin et al., 2005). Thus, the copied behaviour may be altered and need not share all of
the characteristics of the original made by the human.

General framework
The present work takes place in the context of our general framework (Figure 1) that is
adaptable to a wide range of scenarios. The framework consists of a number of interconnected
modules. At the input stage, data may be obtained from either the real world, through visual
sensors, or from a virtual environment through a synthetic vision sensor.

Figure 1. The general framework that embeds the current scenario. Large arrows indicate the
direction of information flow, small arrows denote control signals, while arrows with dashed
lines denote information availability from modules associated with long term memory. Modules
with a white background are not applicable to the scenario described in this paper.

Visual input is processed by computer vision (Rapantzikos and Avrithis, 2005) or synthetic vision techniques (Peters, 2005), as appropriate, and stored in a short-term sensory storage. Gesture expressivity parameters and facial expressions are extracted from the input data (Bevacqua et al., 2006). This storage acts as a temporary buffer and contains a large amount of raw data for short periods of time. Elaboration of this data involves symbolic and semantic processing, high-level representation and long-term planning processes. Moreover, it implies an interpretation of the viewed expression (e.g. extracted Facial Animation Parameter (FAP) values → anger), which may be modulated by the agent (e.g. display an angrier expression) and generated in a way that is unique to the agent (anger → another set of FAPs or FAP values). The generation module (Hartmann et al., 2005) synthesises the final desired agent behaviours.

Application scenario
Currently our system is able to extract data from the real world, process it and generate the animation of a virtual agent. Both the synthesized gesture and the facial expression are modulated by the gesture expressivity parameters extracted from the actor's performance. The input coming from the real world is a predefined action performed by an actor. The action consists of a gesture accompanied by a facial expression. Both the gesture and the facial expression are explicitly requested of the actor and described to him beforehand in natural language (for example, the actor is asked "to wave his right hand in front of the camera while showing a happy face"). The Perception module analyses the resulting video, extracting the expressivity parameters of the gesture and the displacements of facial parts, which are used to derive the FAP values corresponding to the expression performed. The FAP values and the expressivity parameters are sent to the Interpretation module. If the facial expression corresponds to one of the prototypical facial expressions of emotion, this module is able to derive its symbolic name (emotion label) from the FAP values received as input; if not, the FAP values are used directly. In contrast, the symbolic name of the gesture is provided manually, because the Interpretation module is not yet able to extract the gesture shape from the data. Finally, how the gesture and the facial expression will be displayed by the virtual agent is decided by the Planning module, which can compute a modulation either of the expressivity parameters or of the emotion. The animation is then calculated by the Face and Gesture Engines and displayed by the virtual agent.
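As a rough illustration of the Interpretation step, the following sketch matches extracted FAP values against prototypical expressions and returns an emotion label when one is close enough, or the raw FAP values otherwise; the prototype values and threshold are invented for illustration and are not those of the authors' system:

```python
# Minimal sketch of the interpretation step: extracted FAP values are matched
# against prototypical expressions; if one is close enough its emotion label is
# used, otherwise the raw FAP values are passed on. Prototype values and the
# threshold are invented for illustration; not the authors' system.
import math

# Tiny, made-up prototypes over a handful of FAP dimensions.
PROTOTYPES = {
    "joy":     [0.8, 0.6, 0.1],
    "anger":   [-0.5, -0.7, 0.9],
    "sadness": [-0.6, 0.2, -0.4],
}
THRESHOLD = 0.5

def interpret(fap_values):
    """Return an emotion label if a prototype is near, else the raw FAP values."""
    best_label, best_dist = None, float("inf")
    for label, proto in PROTOTYPES.items():
        dist = math.dist(fap_values, proto)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist < THRESHOLD else fap_values

if __name__ == "__main__":
    print(interpret([0.75, 0.55, 0.15]))   # close to the 'joy' prototype
    print(interpret([0.0, 0.0, 0.0]))      # no prototype near: raw FAPs passed on
```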

References
Bevacqua, E., Raouzaiou, A., Peters, C., Caridakis, G., Karpouzis, K., Pelachaud, C., & Mancini, M. (2006).
Multimodal sensing, interpretation and copying of movements by a virtual agent. Perception and Interactive
Technologies, Kloster Irsee, June 2006.
Hartmann, B., Mancini, M., & Pelachaud, C. (2005). Implementing expressive gesture synthesis for embodied
conversational agents. Gesture Workshop, Vannes.
Martin, J.-C., Abrilian, S., Devillers, L., Lamolle, M., Mancini, M., & Pelachaud, C. (2005). Levels of representation in
the annotation of emotion for the specification of expressivity in ecas. International Working Conference on
Intelligent Virtual Agents, pp 405–417, Kos, Greece.
Peters, C. (2005). Direction of attention perception for conversation initiation in virtual environments. International
Working Conference on Intelligent Virtual Agents, pp 215–228, Kos, Greece.
Rapantzikos, K., & Avrithis, Y. (2005). An enhanced spatiotemporal visual attention model for sports video analysis.
International Workshop on content-based Multimedia indexing (CBMI), Riga, Latvia.
Posting real balls through virtual holes

Gert-Jan Pepping
University Center for Sport, Exercise, and Health,
Center for Movement Sciences
Center for Sports Medicine
University Medical Center Groningen, University of Groningen, The Netherlands
g.j.pepping@sport.umcg.nl

In the present paper it was investigated whether participants perceive affordances and judge
action boundaries accurately when negotiating real objects with reference to a virtual environment.
Gibson (1977, 1986) introduced affordances as the basis for the ongoing guidance of movement.
Affordances are generally defined in terms of a discrete task and scientific investigations have
focused on the perception of critical and preferred action boundaries. For instance, Warren (1984) asked actors to indicate whether stair risers of different heights were perceived as climbable.
A general finding is that at the action boundary actors are less certain about which action is
afforded, which is expressed in decreased perceptual accuracy and increased reaction times around
the action boundary. Another common finding is an effect of response type on reaction time when
judging action boundaries. Pepping and Li (2005) for instance, found pronounced effects of the type
of response on reaction time in the task of overhead reaching. The aim of the present study was to
explore the effects of typical virtual environment actions, viz. pressing keyboard buttons and
moving a mouse, on reaction times and perceptual accuracy when posting real balls through virtual
holes.

Method
Fifteen participants were asked to judge whether a small ball (diameter 30mm) would fit
through a disk-shaped virtual hole presented on a computer screen. The diameter of the hole varied
in size, ranging from 27mm smaller than the ball, to 27mm larger than the ball, in three millimeter
steps. Judgments were made in two response task conditions; i) by moving a mouse toward or away
from the virtual hole (N=7) and ii) by pressing yes or no keyboard buttons (N=8). Each hole-size
was presented 20 times, in a randomized order. Before the experimental session participants had
extensively practiced the experimental task. Reaction time was recorded in ms as the time between
the presentation of a virtual hole on the computer screen and the initiation of a mouse movement or
keyboard button press. Participants received no feedback on the accuracy of their response. The
judged action boundary was estimated by fitting the % positive responses (i.e. forward mouse movements, yes keyboard button) for the different hole sizes to a psychometric function, P(x) = 100/(1 + e^{-k(x-c)}), where x denotes the hole size, c the judged action boundary (the hole size at which 50% of the balls were judged as being able to fit through the hole), and k the slope at this point.
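A minimal sketch of this fitting procedure, assuming SciPy is available (the data points are invented and are not the study's data):

```python
# Sketch of fitting the logistic psychometric function described above to the
# percentage of positive responses per hole size, to estimate the judged action
# boundary c and slope k. Hypothetical data; not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, c, k):
    """Percentage of 'fits through' responses for hole size x (mm)."""
    return 100.0 / (1.0 + np.exp(-k * (x - c)))

# Hole sizes (mm) and illustrative percentages of positive responses.
hole_sizes = np.array([21, 24, 27, 30, 33, 36, 39], dtype=float)
pct_positive = np.array([0, 5, 20, 45, 80, 95, 100], dtype=float)

(c_hat, k_hat), _ = curve_fit(psychometric, hole_sizes, pct_positive, p0=[30.0, 1.0])
print(f"judged action boundary c = {c_hat:.1f} mm, slope k = {k_hat:.2f}")
```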

Results and Discussion


Results showed an effect of response mode on c (t(13) = 2.71, p < .05), see Table 1. The
mouse group judged the real ball to fit through virtual holes that were on average 1.6 mm larger
than the diameter of the ball. The keyboard group was more conservative in their judgments. They
judged balls to fit through virtual holes that were on average 4.7 mm larger than the diameter of the
ball. There was no effect of response mode on k.
Table 1. Mean ± SD of the judged action boundary in mm in the two response mode conditions.
Note that the real ball diameter was 30 mm.

group        judged action boundary c (mm)
mouse        31.6 ± 3
keyboard     34.7 ± 4.9
Reaction time analysis revealed a main effect of size on reaction time (F(18,234) = 9.48, p <
.05, partial η2 = .42) and an interaction effect of group * hole-size on reaction time (F(18,234) =
2.29, p < .05, partial η2 = .15). In both response conditions, shorter reaction times were associated
with holes that would either easily fit or not fit the ball, and longer reaction times with hole-sizes
that were near the actual size of the ball (see Figure 1). However, for holes where the ball would not
fit shorter reaction times were found in the keyboard condition, whereas for holes where the ball
would fit shorter reaction times were elicited in the mouse condition.

[Figure: reaction time (ms) plotted against the ratio of virtual hole size to ball size (0.1-1.9), for the mouse and keyboard groups.]

Figure 1. Interaction effect of ball-hole-fit and response mode group on reaction time (ms).

Conclusion
The experiment showed that humans are quite accurate at determining action boundaries of real objects in virtual environments. The judged action boundary, and the time participants took to assemble, tune and launch a response device for posting balls by means of a computer task, were affected by hole size and type of response. Around the action boundary, longer reaction times were recorded. Positive and negative keyboard presses elicited similar reaction times, whereas forward mouse movements took less time to assemble, tune and launch than backward mouse movements. Insights into these effects are relevant for the design of user interfaces. As this study shows, the action capabilities permitted to the user affect the time and accuracy demands of a computer task. Great care should therefore be taken in choosing the task through which users interact with the virtual environment.

References
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting and knowing (pp.
67-82). Mahwah, NJ: Lawrence Erlbaum Associates.
Gibson, J. J. (1986). The ecological approach to visual perception. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Pepping, G.-J., & Li, F.-X. (2005). Effects of response mode on reaction time in the detection of affordances for
overhead reaching. Motor Control, 9, 129-143.
Warren, W. H. (1984). Perceiving affordances: Visual guidance of stair climbing. Journal of Experimental Psychology:
Human Perception and Performance, 10(5), 683-703.
Towards space concept integration in navigation tools

Edwidge E. Pissaloux, Flavien Maingreaud, Eléanor Fontaine & Ramiro Velazquez


Laboratoire de Robotique de Paris, CNRS/FRE 2507 University of Paris 6, France
pissaloux@robot.jussieu.fr

This paper introduces a new external representation of space (the cognitive walking map, CWM), and its tactile coding, for man-space interactions. The proposed CWM, displayed in real-time on LRP's tactile (Braille-oriented) surface (TouchPad, Velazquez, 2006), has been experimentally evaluated with 10 healthy blindfolded subjects during the execution of some elementary tasks underlying navigation in real space. The collected results show that the proposed space representation can be integrated into a portable navigation tool which could assist visually impaired or blind people in their 3D displacements.

Introduction.
Human interaction with space is based on a mental representation of space, a spatial cognitive map (Tolman, 1948), built upon biosensory and proprioceptive data. This map is the origin of all human actions. In the case of elderly or handicapped subjects (blind subjects, spatial neglect, …), the biosensory data are partial or incorrect, resulting in an incorrect or incomplete cognitive map and erroneous brain responses to stimuli. In such a situation a dedicated tool providing the missing data might assist the brain in establishing the correct cognitive map.
The development of the virtual reality concept has led to much research on man-space interaction via reaching tasks, and to the design of haptic tools (EuroHaptics, 2003; EuroHaptics, 2006); however, very few works have been dedicated to man-space interaction during walking and navigation with tactile feedback (Guariglia et al., 2005), and there are almost no tools for space perception (Auvray, 2004; Lenay, 2006). This paper proposes a new approach to man-space interaction assistance, based on the geometry and topology of the subject's nearest space, suitable for tactile display, which might assist humans in executing displacements when access to visual data is absent or very limited.
The rest of the paper is organized as follows. Section 2 addresses the cognitive walking map (CWM) concept for 3D scenes; Section 3 reports the experimental validation of the proposed CWM model, while Section 4 analyses the collected results and concludes the paper with future research directions.

Model of the cognitive map for walking (CWM)


At every instant the walking task induces a partition of the subject's nearest space (Figure 1, left and right pictures) into two subspaces (a dual representation): a subspace defined by obstacles and an obstacle-free subspace. At a given instant t, the “frontiers” of these two subspaces are defined by the borders of the nearest obstacles as viewed (ego-centered) by the subject (Figure 1, central picture). These borders vary dynamically with the subject's observation point and with time; they define a cognitive walking map (CWM). Therefore, at a given instant t, a CWM is defined as a triple: {space_partition, reference frame, metrics}.
Any walking assistance should materialize these three attributes of a space representation.
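To make this triple more concrete, here is a minimal sketch of a CWM on an egocentric occupancy grid, computing the obstacle/free partition and the frontier cells that a binary tactile display could render; the grid values and resolution are invented for illustration and do not correspond to the TouchPad software:

```python
# Minimal sketch of a CWM as a triple {space_partition, reference frame, metrics}
# on an egocentric occupancy grid: cells are split into obstacle vs. free space,
# and frontier cells (free cells adjacent to an obstacle) form the binary pattern
# that a tactile display could render. Grid values are invented for illustration.
import numpy as np

def cwm_frontier(occupancy):
    """occupancy: 2D array, 1 = obstacle, 0 = free (egocentric frame, fixed cell size)."""
    frontier = np.zeros_like(occupancy)
    rows, cols = occupancy.shape
    for r in range(rows):
        for c in range(cols):
            if occupancy[r, c] == 0:
                # A free cell bordering at least one obstacle cell is a frontier cell.
                neighbours = occupancy[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                if neighbours.sum() > 0:
                    frontier[r, c] = 1
    return frontier

if __name__ == "__main__":
    # Egocentric grid in front of the walker: a wall with a gap (hole) in it.
    grid = np.array([
        [1, 1, 0, 1, 1],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
    ])
    print(cwm_frontier(grid))   # binary pattern suitable for a tactile display
```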

Figure 1. Space perception a), CWM b) and virtual partition of the space when walking c)
Experimental evaluation of the proposed CWM
At least four elementary tasks underlie walking: (1) obstacle awareness, (2) estimation of an (ego-centered) distance to obstacles, (3) estimation of an (allo-centered) distance between obstacles (affordance), and (4) space integration. The effectiveness of the proposed CWM and of its tactile (binary) representation displayed on the TouchPad has been evaluated in walking experiments with ten blindfolded, healthy and naïve subjects walking in real space.
Obstacle awareness and estimation of an (ego-centered) distance to obstacles were measured via (1) the number of bumps registered during 10 minutes of navigation in a real space and (2) the size of the effectively explored real surface (estimated from the space perceived via the TouchPad, Figure 2a). The collected results show that on average 5 bumps were recorded and that 94% of the real surface was explored.
During the estimation of an (allo-centered) distance between obstacles (affordance), the subject had to evaluate the presence or absence of a hole in the (virtual) wall in front of him and to decide whether he could pass through it (Figure 2b). For holes of three different sizes and three different locations, the main results show that 62% of centrally located, 48% of left-located and 72% of right-located holes were correctly localized and sized. Learning increased the hole recognition rate by almost 50% from the initial value.
The space integration task consists of the reconstruction of a whole (global) space from its partial (local) “views”. For example, apartment layout integration (Figure 2c, left) consists of understanding, chaining and memorizing a sequence of (empty) rooms (a sequence of their tactile representations, Figure 2c, right) in order to recognize the whole topology of the apartment (visual matching). After the tactile exploration of a room's geometry, the subject had to draw it.

Figure 2. Experimental evaluation of the proposed CWM displayed on the TouchPad: obstacle
awareness and (ego centered) distance to obstacle estimation a), (allo centered) distance
between obstacles estimation b), space integration c).

Three parameters were quantified: (1) quality of drawing (matching between displayed and drawn segment lines), (2) quality of space integration (correct matching between the flat explored during virtual navigation and the flat finally seen), and (3) exploration time per room. 60% of subjects selected the correct environment (using their drawings); the tactile exploration and memorization times decrease with experiment duration, and their average is around 1.6 min per tactile display.

Comments and future works


From the collected statistical results it is possible to conclude that subjects had a good awareness of the presence of obstacles. Ego-centered distance estimation is less successful, as it depends on surface resolution and on surface/space sampling. The latter can be improved via data gathered from new experiments and integrated into the algorithms used for displaying the tactile representation on the TouchPad. Allo-centered distances, useful for estimating the distance between obstacles, are well estimated. Global space integration from local “tactile” views of space, a complex cognitive task, was successfully performed by 60% of subjects. Therefore, it is possible to claim that the proposed CWM space representation and its binary code can be integrated into a portable navigation tool which could assist blindfolded healthy subjects during walking.
Future work should include experiments (a) in dynamic situations (real navigation) with blindfolded healthy subjects in a real scene, and (b) in both dynamic and static situations with low-vision and blind people. Experiments aimed at establishing a tactile flow concept should also be defined and implemented. All of these could contribute to the design of a cognitive travel assistant (CTA).

References
Auvray, M. (2004). Immersion et perception spatiale: l'exemple des dispositifs de substitution sensorielle. Thèse,
EHESS.
EuroHaptics (2003). Dublin, Ireland ; Eurohaptics 2006, Paris, France.
Guariglia C., Piccardi L., Iaria G., Nico D., & Pizzamiglio L. (2005). Representational neglect and navigation in real
space. Neuropsychologia, 43, 1138-1149.
Lenay, Ch. (2006). Réticences et conditions d’adoption des prothèses perceptives. Handicap06, Paris, juin 2006.
Tolman, E. C. (1948). Cognitive Maps in Rats and Men. Psychological Review, 55(4), 189-208.
Vélazquez, R., Hafez, M., Pissaloux, E., & Szewczyk, J. (2006). A Computational-Experimental Thermomechanical
Study of Shape Memory Alloy Microcoils and its Application to the Design of Actuators. International Journal
of Computational and Theoretical Nanoscience, 3(4), 1-13, Special Issue on Modelling of Coupled and Transport
Phenomena in Nanotechnology, ISSN: 1546-1955, American Scientific Publishers

Acknowledgements
This research has been partially supported by the CNRS program ROBEA (HuPer Project).
We thank professor Alain Berthoz from the Collège de France for his scientific discussions on the
subject.
Synchronisation in anticipative sensory-motor schemes

Jean-Charles Quinton, Christophe Duverger & Jean-Christophe Buisson


ENSEEIHT - IRIT, France
buisson@enseeiht.fr

This paper describes a model of anticipative sensory-motor schemes, inspired by Jean Piaget
and Interactivist theories. The necessity of interactivity in order to provide a real epistemic contact
to reality is discussed. We then describe a computer implementation of such schemes, applied to the
recognition of musical rhythms, which illustrates these ideas in a more concrete way.

Assimilation schemes
Assimilation schemes have been introduced by Swiss psychologist Jean Piaget, as dynamical
structures assimilating the agent's environment (Piaget, 1952). In Piaget's theory, schemes are
operating at all levels of an agent's life, from the biological levels where, for example, cells
assimilate the nutrients around them in order to preserve their own structure, to the most abstract
levels of cognition, where schemes act as theories able to assimilate facts and data.
When assimilating a situation, a scheme also accommodates to it, in order to preserve its
assimilation activity. Such accommodation may lead to the creation of newly derived assimilating
schemes, a framework for learning.
Assimilation schemes are enactive processes, as being inherently sensory-motor and
autopoietic. In their simplest form, they are cycles of action/anticipation elements; the action part of
such elements are used to operate on the situation, but also to test whether the situation is as it
seems to be, or not. In that sense, Piaget's schemes are a particular case of the interactivist processes
introduced by Bickhard (1995, 1999). Interactivist processes use action/anticipation pathways to interact with the situation, in a way similar to Piaget's schemes. The fundamental interactivist knowledge is not composed of explicit encodings; on the contrary, it is through its implicit properties that a situation will be categorized and interacted with in an adapted way. Bickhard (1995) has shown that this approach dissolves the 'symbol grounding problem', and that all non-interactive frameworks are doomed to failure with regard to this problem: without interactive feedback, a representation has no real epistemic contact with reality. To take an example, the perception of a chair is not the passive recognition of a set of structural features; instead it is the possibility of sitting on it, related to several complex sensory-motor schemes used in sitting activities. In that sense, it is similar to a Gibsonian perception (Gibson, 1979), where the chair 'affords' the sitting process.
Piaget/interactivist schemes are inherently temporal, since they are composed of sensory-
motor elements which are synchronized on temporally extended processes of the environment. For
example when driving, the effects of a steering-wheel move can only be evaluated as time goes on.
Another important aspect of Piaget/interactivist schemes is that the degree to which they assimilate
a situation provides an internal error criterion for the agent itself, a mandatory ingredient for
unsupervised learning.

Music assimilation schemes


In order to go from this general description to a detailed and operational model, we have been considering schemes that recognize musical rhythms. A previous attempt (Buisson, 2004) validated the feasibility of the method. Edward Large (2002) is working on finding the main tempo in music with a connectionist but similar approach.
In our system, a human user strikes a single key on the computer keyboard, and the program is able to recognize the music by putting into action the sensory-motor scheme associated with it, and by synchronizing it to the key strokes, in period as well as in phase. What we call a rhythm is a sequence of strokes separated by variable-duration delays which are all defined as a fraction of the sequence period. For example (0.25, 0.25, 0.5) could represent two quarter notes followed by a half note (as in the score displayed in the following figure).
At the beginning, the program is only endowed with a metronome rhythm (the simplest
possible rhythm); the accommodation properties of schemes lead to the creation of new schemes
when new rhythms are only partially assimilated by current schemes. The program can be
progressively trained to recognize more and more complex rhythms, in a completely unsupervised
way. As time goes on, the program's musical culture enlarges, composed of a growing store of
sensory-motor musical schemes.
From the initial version of the program, we have kept the general organization and goals, but we have completely redesigned its synchronization mechanism, which was quite slow at making a scheme lock in phase and period onto external key strokes. The new synchronization mechanism is able to lock a scheme's activity onto a compatible rhythm of key strokes in the shortest possible time,
always shorter than the rhythm cycle. As an example, we will consider a scheme assimilating a
rhythm composed of 3 notes. When this scheme tries to assimilate a compatible rhythm performed
on the keyboard, it is running as a computer thread (a lightweight process), itself composed of 3
sub-threads:

[Figure: the three-note rhythm score and the three sub-thread groups of the scheme, each starting with a different active node.]

At the beginning, each group has a different active node (coloured). An active node represents
a future anticipated key stroke, and the initial presence of the three groups represents the complete
uncertainty as to the position of the first key stroke on the rhythm score. Each group tries to
synchronize on the external key strokes in real time, and after a few strokes, only one group will
survive and correctly anticipate the rhythm.
In each group, activity goes from node to node following the arrows. For a node to be active
means that a key stroke is supposed to occur at an anticipated time. A transfer function models the
confidence of the node for this anticipated event, as well as the uncertainty of the expected time.
When the function eventually reaches a threshold, activity goes to the next node, and a new
anticipated time and a new confidence are computed. With a high confidence, a node might
completely ignore some events (noise immunity), or hear an event when there is none (error
completion).
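To make the mechanism more tangible, here is a strongly simplified sketch of a scheme anticipating the next key stroke and nudging its period estimate after each observed stroke; the confidence transfer function and the competing sub-thread groups of the full model are omitted, and the adaptation gain is invented:

```python
# Strongly simplified sketch of a rhythm scheme: the rhythm is a list of note
# durations expressed as fractions of the period; the scheme anticipates the
# time of the next stroke and nudges its period estimate towards each observed
# stroke. The confidence transfer function and the competing sub-thread groups
# of the full model are omitted; the adaptation gain is invented.

class RhythmScheme:
    def __init__(self, fractions, period):
        self.fractions = fractions      # e.g. (0.25, 0.25, 0.5): two quarters + a half
        self.period = period            # current period estimate (seconds)
        self.next_time = 0.0            # anticipated time of the next stroke
        self.index = 0                  # which note of the cycle comes next
        self.gain = 0.3                 # how strongly an anticipation error corrects the period

    def on_stroke(self, t):
        """Assimilate a key stroke at time t and anticipate the following one."""
        error = t - self.next_time                       # late > 0, early < 0
        self.period = max(0.1, self.period + self.gain * error)   # accommodate the period
        # Re-anchor the phase on the observed stroke and anticipate the next stroke.
        self.next_time = t + self.fractions[self.index] * self.period
        self.index = (self.index + 1) % len(self.fractions)
        return self.next_time

if __name__ == "__main__":
    scheme = RhythmScheme((0.25, 0.25, 0.5), period=2.0)
    # Strokes played slightly faster than the scheme's initial period estimate.
    for stroke_time in (0.0, 0.45, 0.9, 1.8):
        nxt = scheme.on_stroke(stroke_time)
        print(f"stroke at {stroke_time:.2f}s -> next anticipated at {nxt:.2f}s")
```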

We have presented in this paper a new algorithm capable of synchronizing a sensory-motor scheme with external sensory events (drum beats). Such synchronizing processes have been discovered in many different brain areas, and appear to play an important part in coordinating brain activities.

References
Bickhard, M. H. (1995). Foundational issues in artificial intelligence and cognitive science. Elsevier Scientific,
Amsterdam.
Bickhard, M. H. (1999). Interaction and representation. Theory & Psychology, 9(4), 435-458.
Buisson, J.-C. (2004). A rhythm recognition computer program to advocate interactivist perception. Cognitive Science,
28(1), 75-87.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Large, E. W., & Palmer, C. (2002). Perceiving temporal regularity in music. Cognitive Science, 26, 1-37.
Piaget, J. (1952). The origins of intelligence in children. London: Routledge and Kegan Paul.
Motor and parietal cortical areas both underlie kinaesthesia

Patricia Romaiguère1, Jean-Luc Anton2, Laurence Casini1 & Jean-Pierre Roll1


1 LNH, UMR 4169, CNRS-University of Provence, France
2 Centre d'IRM fonctionnelle, France
Patricia.Romaiguere@up.univ-mrs.fr

Kinaesthetic sensations usually result from movements, whether voluntarily executed or


passively imposed. In studies of kinaesthesia, it is therefore difficult to discriminate which
components pertain to motor execution and which pertain to sensory feedback. It is thus of interest
to find situations in which subjects experience sensations of movement while they are not actually
moving. Such kinaesthetic illusions can be elicited by artificially manipulating the proprioceptive
channel through tendon vibration. Tendon vibration has long been known to evoke perception of
illusory movements through activation of muscle spindle primary endings. Few studies, however,
have dealt with the cortical processes resulting in these kinaesthetic illusions. We conceived an fMRI and MEG experiment to investigate the cortical structures taking part in these illusory perceptions.

Methods
Since muscle spindle afferents project onto different cortical areas involved in motor control, it was necessary to discriminate between activation related to sensory processes and activation related to perceptual processes. We therefore compared conditions in which similar stimulation of muscle spindles did or did not induce movement illusions, i.e. conditions in which similar sensory inputs resulted in different perceptual outputs. This was achieved through co-vibration of the tendons of wrist
extensor and flexor muscle groups. Indeed, our previous work showed that co-vibration of the
tendons of antagonist muscles at the same frequency did not induce movement illusions, while co-
vibration of these tendons at different frequencies did. Moreover, in the case with different
frequencies, the velocity of the perceived movement depended on the difference of the vibration
frequencies applied on the tendons of each muscle group. We thus designed and compared three
different conditions. In two illusion conditions, co-vibration at different frequencies of the tendons of the right wrist flexor and extensor muscle groups evoked the perception of slow or fast illusory movements. In a no-illusion condition, co-vibration at the same frequency of the tendons of these
antagonist muscle groups did not evoke a sensation of movement. To homogenise as much as
possible the energy of stimulation in all conditions, we kept the sum of the vibration frequencies
constant across conditions.

Results and conclusion


Results showed activation of most cortical areas involved in sensorimotor control in both
illusion conditions. However, in most areas, activation tended to be larger when the movement
perceived was faster. In the no-illusion condition, motor and premotor areas showed little or no activation. Specific contrasts showed that perception of an illusory movement was specifically
related to activation in the left premotor, sensorimotor, and parietal cortices as well as in bilateral
supplementary motor and cingulate motor areas. We conclude that activation in motor as well as in
parietal areas is necessary for a kinaesthetic sensation to arise, possibly through an interaction
between the angular gyrus and the primary motor cortex. Taken as a whole, these data suggest that
the perception of illusory movement is related to the activation of a fronto-parietal network, all the structures of the circuit being tightly functionally related.
Evaluation of a haptic game in an immersive environment

Emanuele Ruffaldi, Antonio Frisoli & Massimo Bergamasco


Scuola Superiore S. Anna, Italy
pit@sssup.it

Introduction
This paper presents the experimental results of an immersive haptic game in which users can play pool using a haptic interface in a stereographic system. This work extends previous work on haptic pool with a desktop interface, moving it into an immersive environment and evaluating the results with a number of users. Moreover, it shows the scalability of the HapticWeb toolkit (Bergamasco, Ruffaldi & Avizzano, 2006), which can be applied to desktop systems as well as to immersive solutions.

The pool application


HapticWeb is a script-based framework for the development of haptic-enabled applications that are almost independent of the specific haptic interface1, and it provides enough flexibility for extension with additional modules, a feature that is fundamental for the creation of complete multimodal applications. This framework completes the initial work outlined in Ruffaldi (2003), which used the Lua scripting language and offered lower quality in both the graphics and the haptic rendering.

A complete example of an application using HapticWeb is the Haptic Pool, which allows the user to play billiards with a haptic interface. This example integrates the dynamic simulation of the pool table with the haptic feedback using the HapticWeb framework described above. The haptic interface is used to impart force and direction to the balls, and also to change the player's point of view, using the direct rendering of the forces. Figure 1 shows the application while playing with
the GRAB device.

Figure 1. Example of the Haptic Pool application in which the GRAB device is being used.

The application is enhanced with audio feedback to provide the sound of collisions of the balls with the cushions and with other balls. The user decides the hit direction through the haptic interface; then, by pressing a button on the device, a virtual sliding constraint is implemented that restricts the cue to move only forward and backward along a line aligned with the hit direction and passing through a point p of the ball, which represents the hit point. The force feedback is computed as an impulse force assumed proportional to the hitting velocity, Fhit = k·vhit. If T is the sampling time, an impulse Ihit = Fhit·T is then applied to the ball at the position p where the cue hits the ball; the linear and rolling initial conditions of the dynamics of the ball are given by:
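
As an illustration of this impulse model, the following minimal sketch computes such initial conditions under simple rigid-ball assumptions; the ball mass, radius, uniform-sphere inertia and the gains are placeholder assumptions, not the application's values.

```python
import numpy as np

def cue_strike(v_hit, hit_dir, p_rel, m=0.17, R=0.028, k=5.0, T=0.001):
    """Initial linear and rolling conditions of the ball after a cue strike.

    Uses the impulse model of the abstract, F_hit = k * v_hit and
    I_hit = F_hit * T, applied at the hit point p (here p_rel, relative to
    the ball centre).  Mass m, radius R, the gains k and T and the
    uniform-sphere inertia 2/5 m R^2 are placeholder assumptions.
    """
    F_hit = k * v_hit                               # impulse force magnitude
    I_hit = F_hit * T * np.asarray(hit_dir, float)  # impulse along the hit direction
    v0 = I_hit / m                                  # linear initial condition
    inertia = 0.4 * m * R ** 2                      # solid sphere about its centre
    omega0 = np.cross(p_rel, I_hit) / inertia       # rolling (spin) initial condition
    return v0, omega0

# A centre hit gives no spin; an off-centre p_rel would produce the
# underspin effects described in the next paragraph.
v0, w0 = cue_strike(v_hit=2.0, hit_dir=[1.0, 0.0, 0.0], p_rel=[0.0, 0.0, 0.0])
print(v0, w0)
```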

1 Being based on CHAI3D, HapticWeb supports most commercial kinesthetic devices based on impedance control.

The hit point p can be changed by the user through the arrow keys to implement underspin effects; see for instance the green point in Figure 3. The billiard cloth is modelled through static (μs) and dynamic (μd) friction properties, with an additional constant force term Fel = k2·m·g, proportional to the ball weight, that models the dissipation of energy due to the elasticity of the cloth under the weight of the balls.
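
A rough sketch of this cloth model is given below; the friction coefficients and k2 are placeholders, and the static-friction case is omitted for brevity.

```python
import numpy as np

def cloth_force(v_slide, m, mu_d=0.15, k2=0.01, g=9.81):
    """Decelerating force from the cloth while the ball slides or rolls.

    Dynamic Coulomb friction mu_d * m * g opposes the sliding velocity, and
    F_el = k2 * m * g is the constant term modelling the energy dissipated by
    the elasticity of the cloth under the ball's weight.  Coefficients are
    illustrative placeholders.
    """
    speed = np.linalg.norm(v_slide)
    if speed == 0.0:
        return np.zeros_like(np.asarray(v_slide, float))
    direction = -np.asarray(v_slide, float) / speed
    return (mu_d + k2) * m * g * direction

print(cloth_force(np.array([0.8, 0.0]), m=0.17))
```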

Then the free dynamics of the ball is computed to determine the evolution of the ball's position over time, until collisions with other balls or with cushions occur. In static conditions we
have, indicating with R the ball radius, and by considering the moment equilibrium equation at the
contact point

while in dynamic conditions, with sliding occurring between the ball and the cloth

The collisions are modelled with simple geometric reflection rules and conservation of momentum, but considering a restitution coefficient that is a function of the material of the colliding objects, modelling dissipative phenomena in the collision. Cushions are modelled with suitable height and contact radius, in order to predict the correct collision behavior. All the dynamics is implemented through the Novodex dynamic simulation engine.

Figures 2 and 3 show the possibilities offered by the application, such as real-time collision detection, dynamics with modelling of ball rolling, and the possibility of applying spin effects when hitting the balls by varying the point of application of the force, p.

Figure 2. A sequence of snapshots of the pooling demo application



Figure 3. Possibility of adding spin effects while hitting the ball

An on-line demo of the haptic pool game is available at https://www.enactivenetwork.org/MMM/page3.html. To work properly, the demo requires an MS Internet Explorer browser and a haptic interface supported by HapticWeb. The same web page shows how the HapticWeb 3D interactive technology can easily be integrated within a standard HTML web page.

Evaluation in an immersive environment


The above pool application has been extended for playing in an immersive environment using a power wall with passive stereography. This work addresses the problems of adapting the interaction from the desktop to the immersive environment while maintaining the same haptic interface. The application has been tested with several users, using the game score and completion time as benchmarking variables. The paper concludes by presenting the results of this evaluation and identifying the problems caused by the different locations of the visual and haptic stimuli (Jansson & Ostrom, 2004; Swapp, Pawar & Loscos, 2006).

References
Bergamasco, M., Ruffaldi, E., & Avizzano, C. A. (2006). Enactive and internet applications: A first prototype of
enactive application that interoperates haptic devices and the world wide web. 2nd Enactive Workshop, Lecture
description, May 25-27 2006, McGill University, Montreal, QC Canada.
Jansson, G., & Ostrom, M. (2004). The Effects of Co-location of Visual and Haptic Space on Judgments of Form.
EuroHaptics 2004.
Ruffaldi, E. (2003). Scripting for setup of haptic experiments. Technical Report Diploma di Licenza, PERCRO, Scuola
Superiore S.Anna.
Swapp, D., Pawar, V., & Loscos, C. (2006). Interaction with co-located haptic feedback in virtual reality. Virtual
Reality, 10(1), 24–30.
Manipulability map as design criteria in systems including a haptic device

Jose San Martin1 & Gracian Trivino2


1 Universidad Rey Juan Carlos, Spain
2 Universidad Politecnica, Spain
jose.sanmartin@urjc.es

As part of the development of a new haptic device serving as a Minimally Invasive Surgery trainer, we needed a tool to assess its performance in a required task. We have to check that it operates in its optimal workspace. One of the characteristics that define the performance of a haptic device is its manipulability µ, which can be defined as the efficiency with which a manipulator transmits force and velocity to its End Effector (Staffetti et al., 2002).

In this paper we present a study on the calculation of a map that assigns a manipulability value to each point of the device's workspace. In this way we establish, as a design criterion, a drawing that identifies the best zones of functioning, i.e. those in which we would prefer our manipulator to work. We selected the PHANToM OMNi from SensAble for this work.

We study the constructive characteristics of the OMNi which, to simplify the work, extend only to the final point (End Effector) where force feedback is transmitted, eliminating the three gimbal elements. Afterwards we analyse the kinematic behavior of the OMNi. This study parallels the work on the PHANToM version 1.5 already presented by different authors (Cavusoglu et al., 2002). The kinematics study includes three steps. The first is to solve the forward kinematics problem: determining the position of the End Effector with respect to the Coordinate System Origin (CSO), given the joint angles and the geometry of the device. We obtain the transformation matrix of the End Effector with respect to the CSO:

\[
\begin{pmatrix}
\cos\theta_1 & -\sin\theta_1\sin\theta_3 & \cos\theta_3\sin\theta_1 & l_1\cos\theta_2\sin\theta_1 + l_2\sin\theta_1\sin\theta_3 \\
0 & \cos\theta_3 & \sin\theta_3 & l_1\sin\theta_2 - l_2\cos\theta_3 \\
-\sin\theta_1 & -\cos\theta_1\sin\theta_3 & \cos\theta_1\cos\theta_3 & l_1\cos\theta_1\cos\theta_2 + l_2\cos\theta_1\sin\theta_3 \\
0 & 0 & 0 & 1
\end{pmatrix}
\]

Coordinates of the End Effector of the manipulator referred to CSO:

\[ x = (l_1\cos\theta_2 + l_2\sin\theta_3)\sin\theta_1; \qquad y = l_1\sin\theta_2 - l_2\cos\theta_3; \qquad z = (l_1\cos\theta_2 + l_2\sin\theta_3)\cos\theta_1 \]
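
For reference, a direct numerical transcription of these position equations (with placeholder link lengths, not the actual OMNi dimensions) could look as follows.

```python
import numpy as np

def omni_forward_kinematics(theta1, theta2, theta3, l1=0.135, l2=0.135):
    """End Effector position relative to the CSO (angles in radians).

    Direct transcription of the position equations above; the link lengths
    are placeholder values, not the actual OMNi dimensions.
    """
    r = l1 * np.cos(theta2) + l2 * np.sin(theta3)
    return np.array([r * np.sin(theta1),                         # x
                     l1 * np.sin(theta2) - l2 * np.cos(theta3),  # y
                     r * np.cos(theta1)])                        # z

print(omni_forward_kinematics(0.1, 0.4, 0.9))
```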

The second is to solve the inverse kinematics problem: determining the configuration that the haptic device must adopt for a known position of the End Effector.

\[
\theta_1 = -\arctan\!\left(\frac{x}{z}\right); \qquad
\theta_2 = \arctan\!\left(\frac{y}{H}\right) + \arccos\!\left(\frac{L^2 + l_1^2 - l_2^2}{2\,l_1 L}\right) \ \text{(with } H^2 = x^2 + z^2\text{)}; \qquad
\theta_3 = \arctan\!\left(\frac{H - l_1\cos\theta_2}{l_1\sin\theta_2 - y}\right)
\]

The third step is to solve the differential model (the Jacobian matrix J), establishing the relations between the angular velocities of the joints and the velocities of the End Effector of the device. Then we can calculate the manipulability index. The calculation of manipulability from the Jacobian shows that the real workspace of the device does not present wells of singular points in its habitual zone of functioning.
We determine the value of µ and obtain a manipulability map of the workspace with iso-manipulability curves, that is, curves joining the points with equal values of µ.

The manipulability value is defined (Cavusoglu et al., 2002; Tavakoli et al., 2004) as μ = σmin(Ju)/σmax(Ju), where σmin and σmax are the minimum and maximum singular values of the matrix Ju, the upper half of the Jacobian matrix. We introduce the manipulability study with the calculation of velocity manipulability ellipsoids for the plane x=0. After this, we represent the manipulability map for x=0 (θ1=0, fig. 1). We will complete the study with manipulability values for the Y=0 and Z=0 planes.
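
A minimal numerical sketch of this computation is given below: the translational Jacobian Ju is estimated by finite differences of the forward kinematics, and μ is taken as the ratio of its extreme singular values; the link lengths, the finite-difference step and the sample configuration are arbitrary assumptions.

```python
import numpy as np

def fk(theta1, theta2, theta3, l1=0.135, l2=0.135):
    # Position equations of the forward kinematics above (placeholder lengths).
    r = l1 * np.cos(theta2) + l2 * np.sin(theta3)
    return np.array([r * np.sin(theta1),
                     l1 * np.sin(theta2) - l2 * np.cos(theta3),
                     r * np.cos(theta1)])

def manipulability(theta, eps=1e-6):
    """mu = sigma_min(Ju) / sigma_max(Ju), with Ju estimated numerically.

    Ju is the translational (upper) Jacobian, obtained here by central finite
    differences of the forward kinematics; mu close to 1 means isotropic
    transmission, mu close to 0 means a well of singular points.
    """
    theta = np.asarray(theta, dtype=float)
    J = np.zeros((3, 3))
    for j in range(3):
        d = np.zeros(3)
        d[j] = eps
        J[:, j] = (fk(*(theta + d)) - fk(*(theta - d))) / (2 * eps)
    s = np.linalg.svd(J, compute_uv=False)
    return s.min() / s.max()

# theta1 = 0 corresponds to the x = 0 plane of fig. 1.
print(manipulability([0.0, 0.3, 1.3]))
```

Sampling θ2 and θ3 over the workspace at θ1 = 0 would yield the values that are contoured as the iso-manipulability curves of fig. 1.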

Manipulability values only make sense for points inside the real workspace of the manipulator. We must therefore obtain the section of the manipulability map that corresponds to this space, by projecting the real workspace of the OMNi onto the map illustrated in fig. 1. The resultant section (fig. 2) shows that the chosen area has the best manipulability values. We found an optimum manipulability zone of the OMNi device at inter-arm angles between l1 and l2 near 90º, coinciding with the central area of the curves in fig. 2, with an optimum value in the upper zone of the map. We verified this point by plotting μ against the inter-arm angle, which presents a maximum around 90 degrees.

Figure 1. Iso-manipulability curve map for the x=0 plane.
Figure 2. Subspace of manipulability defined for the real workspace.

As a design criterion for an application, it is desirable that the workspace overlaps with the zone of best manipulability values of the OMNi. By selecting the location of the OMNi properly we improve the performance of the manipulator, increasing the efficiency of its transmission of velocity and torque to the force-feedback point, the End Effector.

References
Cavusoglu, M., Feygin, D., & Tendick, F. (2002). A Critical Study of the Mechanical and Electrical Properties of the
PHANToM Haptic Interface and Improvements for High Performance Control. Presence: Teleoperators and
Virtual Environments, 11(6), 555-568.
Staffetti, E., Bruyninckx, H., & De Schutter, J. (2002). On the Invariance of Manipulability Indices. In J. Lenarcic and
F. Thomas (eds.), Advances in Robot Kinematics, pp 57-66. Kluwer Academic Publishers.
Tavakoli, M., Patel, R. V., & Moallem, M. (2004). Design Issues in a Haptics-Based Master-Slave System for
Minimally Invasive Surgery. Proceedings of the 2004 IEEE International Conference on Robotics and
Automation (ICRA '04), pp. 371-376, New Orleans, LA.
The roles of vision and proprioception in the planning of arm movements

Fabrice Sarlegna1, 2 & Robert Sainburg2


1 UMR Mouvement & Perception, University of the Mediterranean, France
2 The Pennsylvania State University, State College, USA
sarlegna.fabrice@lycos.fr

To reach accurately toward a visual target, information about the initial hand position appears
to be critical (Desmurget et al., 1998). This information can be determined through vision and
proprioception, yet it is still not well understood how these two sources of information are
combined for controlling goal-directed arm movements. Previous studies have proposed that
relatively simple rules can describe the processes of multisensory integration used to plan the
direction of reaching movements (Sober & Sabes, 2003). However, Sober and Sabes (2005)
recently reported that these rules can differ, depending on the sensory nature of the target. Moreover, the control of direction and distance has been shown to reflect distinct neural processes (Paillard, 1996; Riehle & Requin, 1989). In fact, we previously showed that visual and proprioceptive
contributions to the control of reaching movements differ for movement distance and direction, and
can also differ between the planning and online phases of control (Bagesteiro et al., 2006; Sainburg
et al., 2003). The purpose of the present study was to investigate whether the sensory nature of a
target will influence the roles of vision and proprioception in the planning of movement distance. In
addition, we examined whether the contributions of each modality vary throughout the time course
of the movement.
Two groups of subjects made rapid, single-joint elbow movements. Subjects either aimed
toward a visual target or toward the index fingertip of the unseen opposite hand. Visual feedback of
the pointing index fingertip was only available before movement onset. Using a virtual reality
display, we randomly introduced a discrepancy between actual and seen fingertip location to assess
the sensory contributions to distance planning (see Sarlegna & Sainburg, in press, for a complete
presentation of the experiment).
The results indicated that the influences of vision and proprioception changed substantially
with the target modality. Figure 1(A, B) shows that for the visual target, movement distance varied more with visual information about initial hand position, supporting our recent results (Bagesteiro et al., 2006). For the proprioceptive target, movement distance varied more with proprioceptive information. In fact, our results indicated that visual and proprioceptive contributions did not
change significantly throughout the movement (see Figure 2). The influence of target modality on
peak acceleration led us to conclude that the sensory contributions to distance planning vary with
target conditions, the role of each modality being weighted so that planning processes rely mostly
on the modality in which the target is presented. We suggest that this flexibility reflects an
optimization process limiting sensori-motor transformations between heterogeneous frames of
reference. On the other hand, despite the dominance of one modality, the other modality contributed
significantly as initial hand position was derived from a combination of both modalities,
presumably to optimize the hand localization process.
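
As a purely illustrative example of the kind of weighting this conclusion describes (the numbers and weights below are hypothetical, not the study's fitted values), the estimated initial hand position can be written as a weighted sum of the seen and felt positions, with the weight shifted toward the modality of the target.

```python
def estimated_start(seen_pos, felt_pos, w_vision):
    """Initial hand position as a weighted sum of the two modalities."""
    return w_vision * seen_pos + (1.0 - w_vision) * felt_pos

# Hypothetical 2 cm discrepancy between the seen and the felt fingertip
# location; the weights are invented to show the direction of the effect,
# with vision weighted more for the visual target and less for the
# proprioceptive target.
seen, felt = 10.0, 12.0   # distances from the target in cm (illustrative)
for target, w in [("visual target", 0.7), ("proprioceptive target", 0.3)]:
    print(target, estimated_start(seen, felt, w))
```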

Figure 1. A. Top view of representative single joint, elbow extension movements toward a
visual target.

Figure 1. B. Averaged movement distance as a function of starting location under the different
visuo-proprioceptive conditions. Error bars represent between-subject standard errors. C.
Representative acceleration profiles. Insets represent averaged peak acceleration. Error bars
indicate between-subject standard errors.

Figure 2. Time course of the mean sensory (visual and proprioceptive) contributions to
distance control. Error bars represent within-subject variability (standard deviation of the
mean).

References
Bagesteiro, L. B., Sarlegna, F. R., & Sainburg, R. L. (2006). Differential influence of vision and proprioception on
control of movement distance. Experimental Brain Research, 171, 358-370.
Desmurget, M., Pelisson, D., Rossetti, Y., & Prablanc, C. (1998). From eye to hand: planning goal-directed movements.
Neuroscience and Biobehavioral Reviews, 22, 761-788.
Paillard, J. (1996). Fast and slow feedback loops for the visual correction of spatial errors in a pointing task: a
reappraisal. Canadian Journal of Physiology and Pharmacology, 74, 401-417.
Riehle, A., & Requin, J. (1989). Monkey primary motor and premotor cortex: single-cell activity related to prior
information about direction and extent of an intended movement. Journal of Neurophysiology, 61, 534-549.
Sainburg, R. L., Lateiner, J. E., Latash, M. L., & Bagesteiro, L. B. (2003). Effects of altering initial position on
movement direction and extent. Journal of Neurophysiology, 89, 401-415.
Sarlegna, F. R., & Sainburg, R. L. (in press). The effect of target modality on visual and proprioceptive contributions to
movement distance control. Experimental Brain Research.
Sober, S. J., & Sabes, P. N. (2003). Multisensory integration during motor planning. Journal of Neuroscience, 23, 6982-
6992.
Sober, S. J., & Sabes, P. N. (2005). Flexible strategies for sensory integration during motor planning. Nature
Neuroscience, 8, 490-497.
Choreomediating kinaesthetic awareness and creativity

Gretchen Schiller
Brunel University, West London, UK
Gretchen.schiller@brunel.ac.uk

My practice as a choreographer has extended into the domain of performative interactive arts
where the general public participates in multimodal, multisensual and responsive choreomediated1

environments with their bodies. These interactions are grounded in participatory embodiment, that
is, the media (video, sound, screens, robotics and light) are created to specifically elicit kinaesthetic
responses in the body of the public. These multimodal technical interactive systems generate a
continual relationship or “dance” between the performative participant’s body (Körper and Leib) and the environment. Here the participant enacts a triphasic participatory role, as “dancer”, as “audience” and, to some extent, as “choreographer”.

These projects shift the locus of attention away from dance and technology as separate
subjects of discourse and focus instead on the relational and differential movement dynamics which
take place between them. Through highlighting these “dynamic” transactions, the locus of
discussion is re-directed away from the dichotomous “body” as tool-user/subject on one hand, and
“technology” as interface/object on the other (nature/culture, human/machine).

These embodied transactions I describe as a kinesfield2. The kinesfield describes the body-medium as a temporal-spatial dynamic based on interactive processes of feedback which take place
between the body and its environment. The kinesfield builds on experimental embodied
methodologies from dance and movement awareness methods as well as historical theoretical
descriptions of the body (Table 1).

Table 1: Body-Medium Frames of Reference and Definitions

Date | Name | Discipline | Space
1920s | Laban's Kinesphere - Infinite Space | Dance | Geometric and based on physical body reach of limbs in space.
1964 | Merleau-Ponty's "flesh space" | Philosophy | Space as an extension of flesh, fleshspace.
1965 | Simondon's "transindividual" | Philosophy and Sociology | Objects embody bodily efforts.
1966 | Jonas's biological phenomenology | Philosophy | Organism-lifeworld.
1968 | Birdwhistell's Kinesics | Anthropology, Sociology, Linguistics | Social and contextual.
1978 | Preston Dunlop's Shared Space and Extended Kinesphere | Dance | Intersecting spaces between the public, theatre and performer.
1979 | Gibson's ecological model | Physiology | Biological – organism interrelated in ecology.
1991 | Lefebvre's practico-sensory | Sociology | Body as producer and produced by space – rhythmic.

1 The term Choreomedia emerges from my media and dance based background with such projects as Pas (2006), Trajets phase one (2000-02) and Trajets phase two (2004-06), Shifting Ground (1999), Suspended Ties (1997), Camarà (1996), Euguelionne: the body of the possible (1995), Face à Face (1994). See http://www.mo-vi-da.org.
2 See ‘The Kinesfield: A study of movement-based interactive and choreographic art’. PhD thesis, University of Plymouth, School of Computing, CAIIA-STAR, 2003.
This presentation will illustrate the choreomediated interactive projects trajets and pas3. It will show the ways in which they strategically alter the participant’s habitual experience of weight,
force, time and/or space between their bodies and the environment. It will reveal the ways that
technological feedback is choreomediated to introduce movement experiences which oscillate
between “familiar” movements (what one anticipates) and “foreign” or destabilising movements. The tension between the familiar and the foreign is considered a technique for destabilising the public’s
sensibility to help them sense their embodied state and also discover other forms of fictive bodily
inhabitance such as magnetism, non-gravity, local and distributed presence and inscription.

The questions this presentation would like to address are: How can these choreomediated processes encourage the general public to listen more closely to their kinaesthetic voices and at the same time generate new forms of creativity and expression? What are the qualitative characteristics which distinguish them from historic paradigms in stage dance, video dance and multi-media? Is it
possible to surrender historic notions of ‘what is dance’ or ‘what is body’ and explore the intuitive
kinaesthetic creativity of the general public?

3 All of these works are created with a team of experts in the fields of dance, computer science, video, sound and
engineering. The projects presented at the conference involve the contributions of: Susan Kozel (co-director), Gretchen
Schiller (co-director), Scott Wilson (interface design and programmer), Pablo Mochkovsky (engineer), Robb Lovell
(computer scientist), Ben Roadley (intern from ECE) and RothKimura (architects). Funding and support for different
phases of the research of these projects has been made possible thanks to mô-vi-dä, mesh, Incult, Brunel University,
Simon Fraser University, ECE, Banff Centre of Arts and Canada Council, and the Arts Council of England.
Grasping affordances

Joanne Smith1,2 & Gert-Jan Pepping2,3,4


1 Moray House School of Education, University of Edinburgh, Scotland
2 Center for Human Movement Sciences, 3 University Center for Sport, Exercise and Health, 4 Center for Sports Medicine, University Medical Center Groningen, University of Groningen, The Netherlands
Joanne_smith@education.ed.ac.uk

An affordance, such as the graspability of an object, is determined by the fit between properties of the environment and properties of an action system. In grasping, as the fit between object and hand properties is varied, optimal points emerge, where a given grip configuration is most comfortable or efficient, and critical points emerge, where the limits on an action are reached, causing a shift to an alternative grasping movement. Cesari and Newell (2002) have formalised the invariant body-
scaled information which predicts grip transitions. Furthermore, Van der Kamp, Savelsbergh and
Davis (1998) examined the nature of the phase transition, concluding that a discontinuous phase
transition exists as the system becomes unstable and a shift to a qualitatively different action occurs.
Literature on grip configuration has examined coordination of the digits from the moment of
object contact onward (Jordan, Pataky, & Newell, 2005; Sharp & Newell, 2000). In contrast, here
the aim was to examine the coordination between digits during hand transport and closure prior to
object contact. The question addressed was whether 2-digit and 3-digit grips that are qualitatively
different from contact onwards, demonstrate quantitatively different kinematics and coordination
between digits prior to contact.

Method
A simple reaching task was performed to a series of 21 different-sized cubes. Cubes ranged in size from 1 cm to 3 cm, with small inter-cube intervals of 1 mm, to elicit a spectrum of 2- and 3-digit grip configurations (Cesari & Newell, 2000). Six participants performed eight trials to each cube
size, with no instructions regarding the type of grip configuration to be adopted. In addition to static
body-scaled measurements used to normalise the point of transition, movements of the object and
each digit of the reaching hand were recorded at 500Hz. Analysis was conducted using ANOVA to
distinguish the effects of cube size and grip selection on the dependent measures of movement time,
peak hand velocity and duration of deceleration; all commonly employed in measuring accuracy
requirements of reaching tasks. To measure coordination between the digits, index finger-thumb
peak aperture (ITPA) and middle finger-thumb peak aperture (MTPA) were analysed and the
relationship between them established using regression analysis. It was predicted that despite the
manipulation of cube size an effect of grip configuration would be found on the dependent
measures analysed.
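
A minimal sketch of the two analyses described above is given below: the body-scaled ratio used to normalise the grip transition, and the ITPA-MTPA regression. The function names and example numbers are assumptions for illustration, not the study's data.

```python
import numpy as np

def body_scaled_ratio(cube_size_cm, index_thumb_rom_cm):
    """Cube size normalised by the participant's index-thumb range of motion."""
    return cube_size_cm / index_thumb_rom_cm

def aperture_regression(itpa, mtpa):
    """Slope, intercept and R^2 of MTPA regressed on ITPA (least squares)."""
    itpa, mtpa = np.asarray(itpa, float), np.asarray(mtpa, float)
    slope, intercept = np.polyfit(itpa, mtpa, 1)
    pred = slope * itpa + intercept
    ss_res = np.sum((mtpa - pred) ** 2)
    ss_tot = np.sum((mtpa - mtpa.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Illustrative numbers only; the reported transition range was 0.095-0.16.
print(body_scaled_ratio(2.0, 16.0))
print(aperture_regression([50, 60, 70, 80], [55, 64, 75, 83]))
```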

Results and Discussion


All participants utilised both 2- and 3-digit grip configurations. The profiles of grip selection
for each participant can be seen in Figure 1. All participants shifted to a preferred 3-digit grip
configuration at a cube size/index-thumb range of motion (ROM) ratio of between 0.095 and 0.16 –
apart from one participant who did not shift to a preferred 3-digit grasp (cf. Newell, Scully,
Tenenbaum, & Hardiman, 1989).
ANOVA revealed no significant effects of grip configuration; however, there were significant
effects of cube size on movement time, F(2,76) = 13.51, p<.05; duration of the low-velocity
deceleration phase, F(2,76) = 10.68, p<.05; ITPA, F(2,76) = 60.30, p<.05; and finally MTPA,
F(2,76) = 4.19, p<.05. Analysis of the coordination between digits identified a strong relationship
between peak apertures in the 3-digit grips (R2 = 0.90), where an effect of cube size resulted in a
linear increase in both peak apertures (Figure 2). However there was no clear relationship between
peak apertures in the 2-digit grips, that is, ITPA increased linearly with cube size whilst MTPA was
rather more variable, particularly in the smallest cube sizes.

[Figure 1: panel 1 plots the proportion of 3-digit grasps against cube size (cm) / index-thumb ROM (cm); panel 2 plots peak middle-thumb aperture (mm) against peak index-thumb aperture (mm) for 2- and 3-digit grips, with R2 = 0.90 for the 3-digit grips.]

Figure 1. 1) Profiles of grip selection for each participant displayed as a proportion of 3-digit
grasps and normalised for participant’s index-thumb ROM. 2) Scatter plot for a single
participant of ITPA against MTPA.

Conclusion
From the present analysis we were unable to identify digit kinematics prior to contact that are
unique to, or can distinguish, the differing kinematics observed at contact between 2- and 3-digit
grip configurations. That is, the obvious transition after contact is not as apparent in the kinematics
prior to contact. Results suggest that ITPA is affected by cube size in a continuous manner which does not appear to undergo a sudden shift between 2- and 3-digit grip configurations. Moreover, in approximately half of the 2-digit grasps, where the middle finger does not make contact with the object, MTPA is comparable to that of a 3-digit grasp. This stresses the evolving nature of the grasping action and the functional relationship between digits.
Afforded actions may not be evident or indeed determined from movement onset. Instead
enactive knowledge evolves throughout the action resulting in affordance adaptation, which is
central to the ongoing perceptual guidance of movement. This ability to adapt is an important
consideration when designing enactive interfaces. To accurately replicate interaction with physical
objects caution must be taken when applying the concept of critical action boundaries and phase
transitions to skills requiring manual interaction.

References
Cesari, P., & Newell, K. M. (2000). Body-scaled transitions in human grip configurations. Journal of Experimental
Psychology-Human Perception and Performance, 26(5), 1657-1668.
Cesari, P., & Newell, K. M. (2002). Scaling the components of prehension. Motor Control, 6(4), 347-365.
Jordan, K., Pataky, T. C., & Newell, K. M. (2005). Grip width and the organization of force output. Journal of Motor
Behavior, 37(4), 285-294.
Newell, K. M., Scully, D. M., Tenenbaum, F., & Hardiman, S. (1989). Body Scale and the Development of Prehension.
Developmental Psychobiology, 22(1), 1-13.
Sharp, W. E., & Newell, K. M. (2000). Coordination of grip configurations as a function of force output. Journal of
Motor Behavior, 32(1), 73-82.
Van der Kamp, J., Savelsbergh, G. J. P., & Davis, W. E. (1998). Body-scaled ratio as a control parameter for prehension
in 5- to 9-year-old children. Developmental Psychobiology, 33(4), 351-361.

Acknowledgements
This study was written whilst the first author was supported by the Economic and Social
Research Council (ESRC) award PTA-030-2003-00320.
An innovative portable fingertip haptic device

Massimiliano Solazzi, Antonio Frisoli, Fabio Salsedo & Massimo Bergamasco


PERCRO, Scuola Superiore S. Anna, Italy
a.frisoli@sssup.it

A new 5-DOF fingertip interface is presented for haptic interaction in virtual environments. It has been demonstrated (Hayward, 2004; Niemeyer et al., 2004) that during the exploration and recognition of shapes, the orientation of the object surface and the location of the contact point on the fingerpad are important. The new haptic device has three translational DOFs and two rotational DOFs, which allow a platform to come into contact with the fingerpad at different orientations, directions of approach and locations. By varying these parameters, an experimental test has been carried out to determine the human discrimination threshold for contact orientation on the fingertip.

The general principle of working of the interface


The device can be fixed to the user’s finger and is endowed with an actuation system that
controls the motion of the platform, to bring it into contact with the fingertip. In this way the mobile surface is connected to a link that is fixed around the phalanx of the user’s finger, and we can design an encountered-type haptic device where the path planning problem is no longer an issue, since
the relative position of the phalanx with respect to the base link does not change over time.

Design of the interface


In order to allow the user to support the weight of the device on the finger and to avoid interference of the device with the rest of the hand, reduction of bulk and weight was addressed in the design of the device. In accordance
with this issue, the actuation group, constituted by five
brushed DC motors, is placed in a remote position with
respect to the structure. The actuation of the system is
realized by cables guided inside flexible sheaths, which
start from motor pulleys and reach driven joints.

Figure 1. Prototype of the device

The kinematic structure should position the final plate in a predefined workspace around the
finger, such that in any position, the final plate could be oriented with respect to the finger in a
given range of angles. These requirements are satisfied by a structure with at least 5 DOFs. A hybrid solution was chosen, with a translational parallel stage followed by a rotational stage. The
translational stage is composed of three legs with two universal joints and one rotational pair on the
elbow joint that is supposed to be actuated for each leg; the axes of the two universal joints are
parallel to each other. The cable connected to the motor and a compression spring are mounted
aligned with the Leg Actuation Axis (the axis that intersects the centres of the universal joints): clearly, since the cable tension should always be positive, the compression spring works in opposition to the motor, so that a minimum pre-load is always guaranteed on the cable. While the
external forces are equilibrated only by the cable or the spring action, the external moment applied
on the platform is balanced by the constraint moment for each leg, transmitted to the base only
through the leg links. Figure 1 shows a first prototype of the device.
Control of the device
The control was implemented with local position controllers at the joint level. An inverse
kinematic module was used to convert the desired position expressed in cartesian coordinates to the
corresponding joint coordinates. The non-linear term due to the spring pre-load was pre-
compensated, by adding it in feed-forward to the motor torque in the control loop. The friction of the sheath, modelled according to belt theory, is compensated similarly.
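
A minimal sketch of such a joint-level loop is given below; the gains, the simplified (linear plus offset) spring term and the constant friction stand-in are assumptions, since the abstract only states that the pre-load and the belt-modelled sheath friction are compensated in feed-forward.

```python
def joint_torque(q_des, q, q_dot, kp=2.0, kd=0.05,
                 k_spring=0.8, preload=0.15, tau_friction=0.02):
    """Motor torque for one actuated joint (illustrative gains and models).

    Local PD position control plus feed-forward compensation of the spring
    pre-load (simplified here to a linear term plus a constant offset) and a
    constant stand-in for the sheath friction, which the paper models with
    belt theory.
    """
    pd = kp * (q_des - q) - kd * q_dot               # local position controller
    spring = k_spring * q + preload                  # pre-load term, fed forward
    friction = tau_friction * (1.0 if q_dot >= 0 else -1.0)
    return pd + spring + friction

print(joint_torque(q_des=0.3, q=0.1, q_dot=0.5))
```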

Perception experiments
In order to evaluate the human discrimination threshold for the orientation of the surface in contact with the fingertip, we carried out an experimental test applying the method of constant stimuli. To determine the difference threshold using this method, different pairs of stimuli are presented in random order to the observers; one stimulus of the pair is given a fixed value and is called the “standard stimulus”. The task of the observer is to judge whether he is able to detect the difference between the two stimuli of the pair. Every pair is presented many times and the percentage of positive responses is calculated. The difference threshold at which this percentage reaches 50% is the JND (just noticeable difference).

Figure 2. Sigmoid curve for the difference threshold
In our case the stimulus is the orientation of the surface with respect to the fingertip during a contact; the difference between the orientations ranges from 1 degree to 30 degrees. The direction of approach is always perpendicular to the fingerpad, directed towards the centre of the fingertip.
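
A minimal sketch of how the JND is read off such constant-stimuli data is given below; the response proportions are toy values, not the experimental data, and the 50% point of the fitted sigmoid is approximated by interpolation.

```python
import numpy as np

# Toy proportions of "different" responses per orientation difference
# (illustrative values, not the experimental data).
deltas = np.array([1, 3, 6, 9, 12, 18, 24, 30], dtype=float)   # degrees
p_yes = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.90, 0.97, 1.00])

# The JND is the orientation difference at which detection reaches 50%;
# with a fitted sigmoid this is its midpoint, approximated here by linear
# interpolation between the measured points.
jnd = np.interp(0.5, p_yes, deltas)
print(f"JND ~ {jnd:.1f} degrees")
```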

Conclusions
The test was presented to several subjects, and the JND for each observer was approximately 9 degrees. The experiment pointed out the possibility of rendering surface orientation during haptic interaction solely through the display of local contact geometry. Future developments include the integration with other traditional haptic interfaces and the analysis of the importance of local contact geometry versus kinesthetic information for shape recognition in virtual environments.

References
Hayward, V. (2004). Display of haptic shape at different scales. Proceedings of Eurohaptics, pp 20–27.
Klatzky, R. L., Loomis, J. M., Lederman, S. J., Wake, H., & Fujita, N. (1993). Haptic identification of
objects and their depictions. Perception & Psychophysics, 54(2), 170–178.
Klatzky, R. L., Lederman, S. J., & Metzger, V. A. (1985). Identifying objects by touch: An “expert system”. Perception
& Psychophysics, 37(4), 299–302.
Lee, M. V., Vishton, P. M., Salada, M. A., & Colgate, J. E. (2002). Validating a novel approach to rendering fingertip
contact sensations. Proceedings of the 10th IEEE Virtual Reality Haptics Symposium, pp217–224.
Niemeyer, G., Cutkosky, M. R., Kuchenbecker, K. J., & Provancher, W. R. (2004). Haptic display of contact location.
Proceedings of the Symposium on haptic interfaces for virtual environment and teleoperator systems
(HAPTICS’04), pp40-47.
Sato, Y., Yoshikawa, T., Yokokohji, Y., & Muramori, N. (2004). Designing an encountered-type haptic display for
multiple fingertip contacts based on the observation of human grasping behavior. Proceedings of the Symposium
on haptic interfaces for virtual environment and teleoperator systems (HAPTICS’04), pp66-73.
Stability of in-phase and anti-phase postural patterns with hemiparesis

Deborah Varoqui1, Benoît G. Bardy1,2, Julien Lagarde1 & Jacques-Yvon Pélissier1


1 Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
2 Institut Universitaire de France, France
deborah.varoqui@univ-montp1.fr

Introduction
The aim of this study was to analyze the stability of postural coordination patterns in persons
with hemiplegia. Hemiplegic patients suffer from many postural deficits of different origins (e.g.,
neurophysiological, muscular, sensory). Hemiparesis is characterized by a marked postural (left-right) asymmetry, with more degrees of freedom on the healthy side. One major aim of
rehabilitation is to improve the basic motor functions of the hemiplegic side, including its
contribution to the maintenance of autonomous stance. A good understanding of the post-stroke
movement disorders is essential to propose adequate postural rehabilitation programs.

In a coordination dynamics framework, we investigated the stability of postural patterns between ankles and hips in a hemiparetic population. The relative phase φrel between angular
motion of these two joints was analyzed for this purpose. In healthy subjects, two preferred patterns
exist, in-phase (φrel ≈ 20°) and anti-phase (φrel ≈ 180°, Bardy et al., 1999). The emergence of these
preferential postural patterns, and their stability, depend on a coalition of constraints
(environmental, intentional and intrinsic). For the patients, we hypothesized that the less stable
pattern (the in-phase pattern for posture) would disappear. This hypothesis was based not on the literature on postural pattern formation (nonexistent for hemiparetic patients) but on related studies of bimanual (e.g., Rice & Newell, 2004) and locomotor (Donker & Beek, 2002) patterns.

Method
18 subjects took part in the experiment. The healthy participants (N = 9) were placed in
postural asymmetry, i.e., they were instructed to put 70% of their body weight on one leg. The
hemiplegic group (N = 9) included participants engaged in post-stroke (6 left and 3 right
hemiparetic subjects) rehabilitation. All participants were instructed to produce two coordination modes (0° and 180°) using Virtual Posture, a customized biofeedback system developed to study postural pattern formation (e.g., Faugloire et al., 2005). The two patterns were visually presented to the standing participants on a 1.50 m (H) x 1.10 m (V) projection screen, in the form of an oblique line (positive slope for 0°, negative for 180°) plotted in an ankle–hip position plane (Lissajous figure).
Participants were asked to generate hip-ankle coordination corresponding to the displayed pattern,
with nine cycles (about 90 s) for each pattern. Angular motion of hip and ankle joints was measured
with two electro-goniometers placed on the participant’s hemiparetic leg. Data from the two
goniometers were also used to generate the real time visual feedback in the same ankle–hip
configuration plane that had been used to illustrate the requested relative phase pattern. Participants
were given a 3-min period to become familiarized with the connection between the graphic display
and their own body movements. Measures of coordination (i.e., ankle-hip relative phase and
standard deviation) and performance (i.e., constant error and absolute error between imposed and
produced pattern) were computed.
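
A minimal sketch of these measures is given below, assuming Hilbert-transform phase estimation (the abstract does not specify the phase-extraction method); the synthetic signals and numbers are illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(ankle, hip):
    """Continuous ankle-hip relative phase in degrees (Hilbert transform)."""
    pa = np.angle(hilbert(ankle - np.mean(ankle)))
    ph = np.angle(hilbert(hip - np.mean(hip)))
    return np.degrees(np.angle(np.exp(1j * (pa - ph))))   # wrapped to [-180, 180]

def pattern_errors(phi, required_deg):
    """Constant and absolute error of the produced vs. required pattern."""
    diff = np.angle(np.exp(1j * np.radians(phi - required_deg)))
    return np.degrees(np.mean(diff)), np.degrees(np.mean(np.abs(diff)))

# Synthetic 90 s trial at 0.1 Hz: the hip lags the ankle by ~170 deg while
# 180 deg was required, giving a small constant error and ~10 deg absolute error.
t = np.linspace(0, 90, 9000)
ankle = np.sin(2 * np.pi * 0.1 * t)
hip = np.sin(2 * np.pi * 0.1 * t - np.radians(170))
phi = relative_phase(ankle, hip)
print(pattern_errors(phi, 180), np.std(phi))
```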

Results and Discussion


Two main results were obtained. First, absolute error was significantly higher for the
hemiplegic group than for the healthy group, but only for the in-phase pattern (Figure 1A). When 0°
was required, healthy participants produced around 0°, but patients produced a pattern close to
180°. Second, hemiplegic subjects were more variable than healthy subjects when performing 180°
(Figure 1B). For healthy subjects, the results are consistent with previous studies in spite of the
imposed postural asymmetry (Bardy et al., 1999; Faugloire et al., 2005).

[Figure 1: panels A and B compare the hemiplegic and healthy groups for the 0° and 180° patterns; asterisks indicate significant differences.]

Figure 1. Absolute error between required and produced patterns (A) and standard deviation of
ankle-hip relative phase (B).

In conclusion, this preliminary study suggests dramatic changes in postural dynamics accompanying post-stroke hemiparesis. The more stable pattern (180°) lost its stability, and the less
stable pattern (0°) disappeared. From a fundamental point of view, it seems that the entire stability
landscape of postural dynamics has been modified after the cerebro-vascular accident and its neurophysiological consequences. From a more applied perspective, the results of the present research suggest potential causes of post-CVA supra-postural deficits (such as walking, reaching, etc.) in
the major changes observed in postural dynamics. Future research using Virtual Posture will test
the evolution of the entire postural repertoire (preferred coordinative states and their stability)
during post-stroke rehabilitation, and test the benefit of using this type of biofeedback during the
rehabilitation period.

References
Bardy, B. G., Marin, L., Stoffregen, T. A., & Bootsma, R. J. (1999). Postural coordination modes considered as
emergent phenomena. Journal of Experimental Psychology: Human Perception and Performance, 25, 1284-
1301.
Donker, S. F., & Beek, P. J. (2002). Interlimb coordination in prosthetic walking: effects of asymmetry and walking
velocity. Acta Psychologica, 110, 265-288.
Faugloire, E., Bardy, B. G., Merhi, O., & Stoffregen, T. A. (2005). Exploring coordination dynamics of the postural
system with real-time visual feedback. Neuroscience Letters, 374, 136-141.
Rice, M. S., & Newell, K. M. (2004). Upper-extremity interlimb coupling in persons with left hemiplegia due to stroke.
Archives of Physical Medicine and Rehabilitation, 85, 629-634.
Dynamical principles of coordination reduce complexity: the example of
Locomotor Respiratory Coupling

Sébastien Villard 1, Denis Mottet2 & Jean-François Casties2


1 Human Factors Research Laboratory, University of Minnesota, USA
2 Motor Efficiency and Deficiency Laboratory, University Montpellier 1, France
svillard@umn.edu

Introduction
Von Holst’s works (von Holst, 1937) emphasized the existence of global rules that capture the
basic principles in motor coordination. Such principles reduce complexity in such a way that control
remains manageable by a system with limited resources. Since these studies, researchers have developed and improved their understanding of motor control, with the benefit of a dynamical systems approach to non-linear coupled oscillators. The sine circle map model predicts a hierarchical structure of the frequency ratios of two biological oscillators when the coupling relationship changes (deGuzman & Kelso, 1991; Treffner & Turvey, 1993). Nevertheless, the use
of such a model seems to be delicate when the coordination is involved in a global task like
locomotion: the principles of stability of the frequency ratios do not seem to be respected in a gross
coordination such as the Locomotor Respiratory Coupling (LRC) (Amazeen, Amazeen, & Beek,
2001). Our aim is to check whether the synchronization theory can serve as a theoretical framework
to explain the behavior in specific LRC tasks, such as when the coordination between subsystems is
constrained by a global demand such as oxygen supply and mechanical energy production. In these
tasks, the preferred frequency is well known to characterize the mechanical efficiency of the
locomotion (Marsh, Martin, & Sanderson, 2000) and the most stable rhythmical behavior (Diedrich
& Warren, 1995). We make the assumption that the dynamical stability principles captured by the
sine circle map can explain this feature of preferred frequencies for both locomotion and respiratory
frequency.

Method
To test this hypothesis, 10 healthy males performed six 20-minute pedaling exercises at the preferred frequency -30% (Fpref-), the preferred frequency (Fpref), and the preferred frequency +30%
(Fpref +). These three frequency conditions were applied for the pedaling frequency (FP) and the
respiratory frequency (FR). Exercise intensity was set at 20% below the first ventilatory threshold,
and was adapted to maintain constant oxygen consumption in all experimental conditions. We
recorded the last 10 minutes of each condition to compute the most used frequency ratio for each
condition. This analysis of the frequency locking was complemented with an analysis of the relative
phase between each pedaling stroke and the relevant respiratory event. As an index of phase
locking, we computed the dispersion of the relative phase (dφ) for the mode of the frequency ratio
(McDermott, Van Emmerik, & Hamill, 2003).
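
A minimal sketch of both measures is given below, under assumed definitions (counting-based frequency ratios per window and a circular dispersion index for dφ); the synthetic pedal and breath times are illustrative, not experimental data.

```python
import numpy as np
from collections import Counter

def most_used_ratio(pedal_times, breath_times, window=20.0):
    """Mode of the pedal:breath count ratio over successive time windows."""
    t_end = min(pedal_times[-1], breath_times[-1])
    ratios, t = [], 0.0
    while t + window <= t_end:
        n_p = np.sum((pedal_times >= t) & (pedal_times < t + window))
        n_b = np.sum((breath_times >= t) & (breath_times < t + window))
        if n_b:
            ratios.append(round(n_p / n_b, 1))
        t += window
    return Counter(ratios).most_common(1)[0][0]

def phase_dispersion(pedal_times, breath_times):
    """Circular dispersion of each breath's phase within its pedal cycle.

    Returns 1 - R, where R is the mean resultant length: 0 for perfect phase
    locking, values near 1 for uniformly scattered phases.
    """
    phases = []
    for b in breath_times:
        i = np.searchsorted(pedal_times, b) - 1
        if 0 <= i < len(pedal_times) - 1:
            cycle = pedal_times[i + 1] - pedal_times[i]
            phases.append(2 * np.pi * (b - pedal_times[i]) / cycle)
    R = np.abs(np.mean(np.exp(1j * np.array(phases))))
    return 1.0 - R

# Synthetic example: pedalling at 1 Hz with breathing locked 4:1 plus jitter.
pedal = np.arange(0.0, 600.0, 1.0)
breath = np.arange(0.2, 600.0, 4.0) + np.random.default_rng(0).normal(0, 0.05, 150)
print(most_used_ratio(pedal, breath), phase_dispersion(pedal, breath))
```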

Results and Discussion


This experiment confirms that we cannot verify our assumption through the predicted stability of the frequency ratios used. We noticed that higher-order frequency ratios could be used in the Fpref conditions and, in the same way, low-order ratios in the Fpref+ and Fpref- conditions. This phenomenon reflects the adaptability of this system of two non-linear oscillators to change from one synchronization region to another, to fit the task demand. In this case the main constraint was to maintain chemical homeostasis in the blood.
However, the use of a preferred pedaling or respiratory frequency was characterized by a
significantly lower dφ (Figure 1). This result reveals stronger phase coupling in Fpref than in the other frequency conditions. Freely chosen frequencies spontaneously use an optimal detuning (difference
between the breathing and the pedaling frequencies) to stabilize each available mode locking.

Figure 1. Significant decrease of dφ when the preferred frequencies were used

Conclusion
We used the example of the LRC to show that the theory of non-linear coupled oscillators can
capture (part of) the organization of the behavior even in tasks where the global energetic constraint
is the most prominent. Furthermore, we showed that the use of the sine circle map predictions, such as transitions between frequency ratios, needs to be combined with an investigation of the relative phase. Finally, our work gives new evidence showing that the locomotor-respiratory system is widely adaptable in terms of frequency locking to fit the global energetic and mechanical demands; at the same time, we showed that the rules that drive these adaptations aim at an optimal detuning that results in the most stable phase relation possible, as predicted by synchronization theory.

References
Amazeen, P. G., Amazeen, E. L., & Beek, P. J. (2001). Coupling of breathing and movement during manual wheelchair
propulsion. Journal of Experimental Psychology: Human Perception and Performance, 27(5), 1243-1259.
deGuzman, G. C., & Kelso, J. A. (1991). Multifrequency behavioral patterns and the phase attractive circle map.
Biological Cybernetics, 64(6), 485-495.
Diedrich, F. J., & Warren, W. H., Jr. (1995). Why change gaits? Dynamics of the walk-run transition. Journal of
Experimental Psychology: Human Perception and Performance, 21(1), 183-202.
Marsh, A. P., Martin, P. E., & Sanderson, D. J. (2000). Is a joint moment-based cost function associated with preferred
cycling cadence? Journal of Biomechanics, 33(2), 173-180.
McDermott, W. J., Van Emmerik, R. E., & Hamill, J. (2003). Running training and adaptive strategies of locomotor-
respiratory coordination. European Journal of Applied Physiology, 89(5), 435-444.
Treffner, P. J., & Turvey, M. T. (1993). Resonance constraints on rhythmic movement. Journal of Experimental
Psychology: Human Perception and Performance, 19(6), 1221-1237.
von Holst, E. (1937). Vom wesen der ordnung im Zentralnervensystem (On the nature of order in the central nervous
system) (R. D. Martin, Trans.). In The behavioral physiology of animals and man: selected papers of E. von
Holst (University of Miami Press ed., Vol. 1, pp. 81-110). Coral Gables, FL.
Simulation validation and accident analysis in operator performance

Michael G. Wade & Curtis Hammond


School of Kinesiology, University of Minnesota, USA
mwade@umn.edu

The simulation of real-world environments is an increasingly popular tool for almost every form of research, whether for environmental demonstration, simulation training or, as in the case of this report, as a research tool. The use of synthetic environments has become almost commonplace, and research is being carried out at all levels using simulated environments. In spite of this upsurge, little research has been reported that actually validates the results of such simulated protocols. This report is a step in that direction. First, we report data comparing a virtual environment (VE) with a comparable real-world (RW) driving environment; second, we report an analysis of driving performance assessing the role of forward-looking blind spots (FLBS) in mediating vehicle crashes.

We sought a validation of our simulator to better understand the relationship between driving behavior in a VE context and in the real-world (RW) driving environment on which the simulation was based. This report provides information on the fidelity of driving performance in a fixed-base driving simulator. First, a static set of responses was recorded: an accelerometer affixed to the head of the operator recorded responses to the optical flow generated by a vehicle in motion in the on-road (RW) portion of the driving test. All test participants completed a questionnaire that documented their experiences of the driving environment, including any nauseogenic effects or feelings.

The dynamic portion of the validation study recorded operator performance in both the RW and the VE settings. Direct measures of lane positioning were recorded, as was variation in road width and road markings. In both the RW and the VE, an appropriate number of parked cars, stop signs, etc. were present in the driving scenario, so as to provide as reliable a replica as possible of the RW driving environment in the VE computer-generated environment. The study compared variation in lane positioning in both environments, in addition to driving behaviors such as missed stop signs and the positioning of the vehicle on turns, in order to assess the correlation between the RW and the VE.

A second study reports driving behavior in the context of “blind spots” induced by the A-pillars of most vehicles, which support the roof on either side of the windshield. Forward-looking blind spots (FLBS) occur when the driver’s field of view is compromised as a result of the obscured line of sight caused by the support pillar on either side of the windshield, known as the A-pillar. Forward-looking blind spots are different from the traditional rear-view blind spot, which is the unseen rear area that falls between a vehicle’s rear-view driving mirror and the mirror on the outside of the vehicle. The FLBS produced by the A-pillar can cause two approaching vehicles to remain hidden from view for an extended period of time, due to coincident deceleration or acceleration as the vehicles approach the intersection. This prolonged blind spot can have serious consequences, as neither driver is aware of the other vehicle on such a collision course. This study analyzed the relationship between the FLBS, the approach speeds of two vehicles to an intersection at right angles, and driver behavior relevant to the likelihood of such an accident.

The wrap-around simulator (WAS) at the University of Minnesota is a large dome-like structure with a wrap-around screen and multiple projectors that produce a forward field-of-view image of approximately 130 degrees around an appropriately instrumented vehicle. At each simulated intersection the participant’s scanning behavior of the environment was assessed relative to safely crossing the intersection. Scanning behavior was scored in four categories: 1) eyes fixed, peripheral vision only; 2) eyes-only scan, left/right, no head motion; 3) eye/head scan, head rotates but position does not change; and 4) active scan, head moves left/right and forward/back (looking around the A-pillar). Participants’ reports were also recorded regarding target vehicle
acquisition. Acquisition increased as the level of scanning behavior improved and the collision rates
decreased maximally at the “active” scanning levels (3 and 4). Signage (yield) at intersections
produced no significant correlation with the acquisition rate, collision rate or scanning level.

Our data suggest that when considering automotive design, traffic systems and driver
behavior, the low level of the driver’s scanning behavior can lead to potentially dangerous driving
conditions. This issue applies to automotive and pedestrian traffic and explains to some degree why
pedestrians seem to “appear” in the road as a driver initiates a left or right turn.

Overall, the validation study and the study evaluating blind spots generated by the A-pillar of most vehicles suggest that the use of a fixed-base simulator utilizing a computer-generated virtual world has valid applications in evaluating driver behavior. Simulation of this nature is a low-cost means of accident analysis that does not require extensive in-vehicle instrumentation to simulate road noise, road motion, etc. Used as a tracking device, it allows driver behavior to be recorded and evaluated for the purposes of both safety and traffic engineering design.
Catching and judging virtual fly balls

Frank T. J. M. Zaal, Joost D. Driesens & Raoul M. Bongers


Center for Human Movement Sciences, University of Groningen, The Netherlands
f.t.j.m.zaal@rug.nl

When using Virtual Reality, one might ask how well virtual reality copies real reality.
For people using VR techniques to understand human behavior, the reason to ask this question is
obvious: while using VR, are we really studying the thing that we intended to study (real behavior)?
But for people using VR for other purposes as well, one would like to exploit as much as possible
the natural repertoire of behavioral capacities: staying as close as possible to natural behavior seems
to be most effective. Here we present results of experiments on the catching of fly balls,
experiments that we conducted using VR (a CAVE), as well as an experiment that used a
comparable set-up, but without the coupling between the movement of the observer and the
display.

Catching and judging fly balls in a CAVE


In a series of experiments, we had participants either judge the passing side of simulated
approaching fly balls or attempt to intercept those virtual balls using the CAVE’s wand
(Zaal & Michaels, 2003). One result worth mentioning here is that participants were quite
unsuccessful in the latter task, although they did very well on the former, which used partly
identical stimuli. A later experiment, in which participants were instructed to intercept the virtual
ball with the forehead, showed much higher success rates. We attributed this difference in success
to the (natural) feedback that was absent in the first experiment (the wand did not provide any haptic
feedback) but was present in the later experiment, in which optical looming was informative about
the point where the ball had passed or hit the body.

Figure 1. The differences in head movement as a function of the task (judging vs. tracking).
Adapted from Zaal & Michaels (2003)

Figure 2. The effect of instructions to either track the virtual ball or to fixate the horizon on
the accuracy of perceptual judgments of future passing side.

A second result that seems interesting in the context of this meeting is that judging
the future passing side seemed to involve a different use of (the same?) visual system: When asked
to try to intercept the virtual ball (although being unsuccessful), participants typically tracked the
ball with their gaze, whereas, when asked to give a perceptual judgment, they tended to keep
looking straight ahead (Figure 1). We concluded that one needs to be cautious when interpreting
data from perceptual-judgment studies to address issues pertaining to real action (Zaal & Michaels,
2003).

The effect of moving the head on the accuracy of perceptual judgments


Recently, we performed a study in which we manipulated the tracking of the virtual fly balls
and studied its effect on the accuracy of passing-side judgments (Driesens & Zaal, 2006).
After a block of trials in which no instruction was given, participants were asked either to track the
virtual ball or to fixate the horizon. We found a clear effect of the instruction on the accuracy of the
judgments: Accuracy was significantly better when tracking than when fixating (Figure 2).
Furthermore, and in contrast to our earlier findings, in this set-up, without a coupling between
observer movement and the displayed motion, observers did track the ball with their gaze when
given no instruction.

Conclusion
We did find differences among tasks (judging vs. intercepting) and among set-ups
(real VR vs. large-screen display). A more systematic study of the sources of all these effects
seems important for the development of VR set-ups and their use.

References
Driesens, J. D., & Zaal, F. T. J. M. (2006). Influence of head movement on judging virtual fly balls. 9th European
Workshop on Ecological Psychology, Groningen, The Netherlands: July 5-8, 2006.
Zaal, F. T. J. M., & Michaels, C. F. (2003). The information for catching fly balls: Judging and intercepting virtual balls
in a CAVE. Journal of Experimental Psychology: Human Perception and Performance, 29, 537-555.
Zoomable user interfaces: Ecological and Enactive

Mounia Ziat, Olivier Gapenne, Charles Lenay & John Stewart


COSTECH, Department of Technology and Human Sciences,
Technological University of Compiègne, France
mounia.ziat@utc.fr

In information visualisation, zoomable user interfaces (ZUIs) were developed to support
navigation in large information spaces. They offer an unbounded space and allow unlimited panning
and zooming, but their main drawback is the risk of getting lost in the information space.
Understanding how a human being perceives scale changes and how he or she lives this
“zoomable” experience will help to prevent user disorientation when manipulating this kind of
interface. Drawing on ecological and enactive theories, we try to bring some elements of an answer
in order to understand navigation in ZUIs.

Introduction
The far distance and the invisible have always fascinated human beings, who see in them
something that exceeds them, fascinates them and, to some extent, frightens them. From the moment
humans equipped themselves with instruments for approaching the far distance and magnifying the
invisible, they widened their space of exploration. Since the advent of optical instruments, we have
passed from the “closed world” to “the infinite universe”; our personal space has been
decompartmentalized and can extend ad infinitum thanks to this abolition of distance, which at the
same time involves a compression of time. The individual thus enters into a new relation with
the world and organizes perception according to these new surrounding elements. The zoom is
therefore a “new” perceptual experience, inscribed in the human collective with the advent of this
technique. Accordingly, to perceive is “to enact”: it is to “self-organize” in response to the external
elements that come to disturb our former perception (Varela, 1989; Varela, Thompson & Rosch,
1993). Human perception has thus changed and self-organized with the advent of the zoom, brought
about by powerful tools such as microscopes, telescopes, cameras and, more recently, computers.

Zoomable User Interfaces (ZUI)


ZUIs were designed to visualize a large quantity of information within a limited space (that of
the screen), which remains problematic with a classical WIMP (Windows, Icons, Menus, Pointer)
display (Beaudouin-Lafon, 2000). Pad (Perlin & Fox, 1993) was one of the first ZUIs, realized by
Perlin and Fox in 1993. It led to Pad++ in 1994 (Bederson & Hollan, 1994) and Jazz in 2000
(Bederson, Meyer & Good, 2000). Their space is infinite in length and width, which allows the user
to perform unlimited pans and zooms in order to navigate this multi-scale space. In contrast to a
geometric zoom, the zoom used is a semantic zoom: the semantic content of the page is modified
with each scaling, i.e. the detail level and the object representation differ at each level of zoom.
But with “desert fog” (Jul & Furnas, 1998), the user gets lost in the space of scales and forgets
his/her course during navigation.
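
As an illustration of the distinction between geometric and semantic zooming, here is a minimal sketch (hypothetical scale thresholds and field names, not the Pad/Pad++ API): the representation returned for an item depends on the current scale, rather than being a mere rescaling of the same drawing.

```python
# Minimal sketch of a semantic zoom (hypothetical thresholds, not the Pad/Pad++ API):
# the representation of an item changes with the zoom scale, whereas a geometric zoom
# would only rescale the same drawing.

def semantic_representation(item: dict, scale: float) -> str:
    """Return the content displayed for an item at a given zoom scale."""
    if scale < 0.5:
        return "."                                        # far out: a simple marker
    if scale < 2.0:
        return item["title"]                              # intermediate: title only
    return item["title"] + "\n" + item["abstract"]        # close up: full detail

def geometric_zoom(item: dict, scale: float) -> dict:
    """A geometric zoom keeps the same content and only changes its displayed size."""
    return {"text": item["title"], "font_size": 12 * scale}

page = {"title": "Enactive /06", "abstract": "Proceedings of the conference..."}
for s in (0.2, 1.0, 4.0):
    print(s, repr(semantic_representation(page, s)))
```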

Zoom and Enaction


The perception one has of an object in the world is relative and proper to each person. During
navigation in a ZUI, the subject’s “zoomable” perceptual experience follows an enactive
process, i.e. the zoom takes on its significance in the action of the subject, which results from the
subject’s engagement in his or her activity. It is as if the subject were climbing a ladder whose rungs
were built during the climb. This metaphor illustrates the concept of enaction (Varela, Thompson
& Rosch, 1993). The “zoomable” world is enacted by the subject, i.e. it is built in the action that
he or she constructs at each level of zoom. A perceived world (real or virtual) depends closely on who
is perceiving it and who is engaged, at the same time, by body and mind: “...cognitive
faculties are inextricably related to the history of what is lived, in the same way that a path that does
not exist beforehand appears while walking” (Varela, 1989). This path which appears while
walking, while zooming in our case, is navigation along the axis of scales (Furnas & Bederson,
1995).

Zoom and Ecology


The concept of affordance (Gibson, 1986) can be extended to a virtual world displayed behind
a computer screen. Zoomable user interfaces afford depth, since we have the impression of
“travelling” along the dimension of scale. Two zoomable experiences are possible: i) the
experience of the traditional zoom, which corresponds to an increase in the object’s size; ii) the
experience of the continuous zoom (made possible by optical instruments, cameras, etc.). When I
use the latter, I consider that I approach the object, or that it approaches me, without considering
that the object becomes larger relative to me, i.e. I maintain the “size constancy” of the object.
It is as if I had shrunk the distance between me and the object. The semantic zoom
used in zoomable user interfaces lets me live both experiences. When I zoom in on an object,
I go through some levels of zoom before the semantic content changes, i.e. the object does not
change appearance during this series of zooms. This first phase is lived as an enlargement of
the object’s size. From the moment the semantic content changes, I am aware of
perceiving the object in more detail and thus of advancing in depth in order to perceive it better;
the object approaches me (or I approach it), and navigation therefore proceeds in two steps: a
navigation “without depth” and a navigation “with depth”, which can be considered as an extension
of my body along the axis of scales. In other words, from the moment I conceive a depth, an
axis of scales, I live the zoom as moving closer; if I do not feel the depth, I live it more as an
expansion of the object. By giving access to depth, a ZUI simulates the affordance of
locomotion, since I have the impression of approaching the object (or the object approaching
me), whereas a “traditional” zoom does not afford displacement, since it does not give this feeling
of navigating in depth.

References
Beaudouin-Lafon, M. (2000). Instrumental interaction: An interaction model for designing post-WIMP user interfaces.
Proceedings of the ACM Conference on Human factors in computing systems, CHI’00, ACM Press, pp. 446-
453.
Bederson, B. B., & Hollan, J. D. (1994). Pad++: A zooming graphical interface for exploring alternate interface physics.
Proceedings of the 7th annual ACM symposium on User interface software and technology, UIST’94, ACM
Press, pp. 17-26.
Bederson, B. B., Meyer, J., & Good, L. (2000). Jazz: An Extensible Zoomable User Interface Graphics Toolkit in Java.
Proceedings of the 13th annual ACM symposium on User interface software and technology, UIST’00, CHI
Letters, 2(2), pp. 171-180.
Furnas, G., & Bederson, B. B. (1995). Space-scale diagrams: Understanding multiscale interfaces. Proceedings of the
ACM Conference on Human-Computer Interaction, CHI'95, ACM Press, pp. 234-241.
Gibson, J. J. (1986). The Ecological Approach to Visual Perception. Hillsdale, New Jersey: Lawrence Erlbaum
Associates.
Jul, S., & Furnas, G. W. (1998). Critical Zones in Desert Fog: Aids to Multiscale Navigation. Proceedings of the 11th
annual ACM symposium on User interface software and technology, UIST’98, ACM Press, pp. 97-106.
Perlin, K., & Fox, D. (1993). Pad: An Alternative Approach to the Computer Interface. Proceedings of the 20th annual
conference on Computer graphics and interactive techniques, SIGGRAPH’93, pp. 57-64.
Varela, F. (1989). Autonomie et connaissance : essai sur le vivant. Paris: Seuil.
Varela, F., Thompson, E., & Rosch, E. (1993). L'inscription corporelle de l'esprit: sciences cognitives et expérience
humaine. Paris : Seuil.

List of demos

Accelerating object-command transitions with pie menus
Mountaz Hascoët, Jérôme Cance (Montpellier, France)

Improving drag-and-drop on wall-size displays
Mountaz Hascoët, Maxime Collomb (Montpellier, France)

Enactic applied to sea state simulation
Marc Parenthoën (Brest, France)

Haptic-audio interaction/applications for blind people
Teresa Gutiérrez (Bilbao, Spain)

PERCRO enactive technology demo
Franco Tecchia, Davide Vercelli, Giuseppe Marino, Alessandro Nicoletti (Pisa, Italy)

DLR technologies for enaction
Carsten Preusche, Thomas Hulin (Oberpfaffenhofen, Germany)

Enactive reaching
Bruno Mantel, Benoît Bardy, Tom Stoffregen (Montpellier, France – Minneapolis, USA)

Enactive posture in post-stroke reeducation
Deborah Varoqui, Benoît Bardy, Elise Faugloire, Bruno Mantel (Montpellier, France)

Full body motion capture with an active optical system (video)
Ronan Boulic (Lausanne, Switzerland)

Enactive network, EES Creativity Group: Soundraw
Roberto Casati (Paris, France)

Dynamical and physical modelling with CORDIS ANIMA
Annie Luciani, Claude Cadoz, Maxime Houot, Mathieu Evrard (Grenoble, France)

Audio-haptic enactive applications
Charlotte Magnusson (Lund, Sweden)

Biometrics: AURION Zero Wire EMG for movement sciences
Albert Gaudin, Anne-Claude Gaudin (Orsay, France)

First Author Index

A
Alvarez, Julian ……… 99
Amamou, Yusr ……… 101
Aznar-Casanova, José ……… 103

B
Bachimont, Bruno ……… 44
Bendahan, Patrice ……… 105
Benyon, David ……… 22, 24
Bernardin, Delphine ……… 49
Boulic, Ronan ……… 51, 241
Brady, Alan ……… 107

C
Cance, Jérôme ……… 109
Cao, Da ……… 111
Cardin, Sylvain ……… 113
Casati, Roberto ……… 241
Collomb, Maxime ……… 115
Cornes, Natalie ……… 117
Coutte, Alexandre ……… 119
Coutton-Jean, Cécile ……… 121

D
de Gotzen, Amalia ……… 123
Delignières, Didier ……… 125
Demougeot, Laurent ……… 127
Descatoire, Aurélien ……… 129
Descoins, Médéric ……… 131
D'Incà, Gianluca ……… 133
Dionisi, Dominique ……… 53

F
Fass, Didier ……… 135
Faugloire, Elise ……… 137
Fitzpatrick, Paula ……… 139
Flanagan, Moira ……… 141
Florens, Jean-Loup ……… 57
Fu, Wai-Tat ……… 59

G
Gangopadhyay, Nivedita ……… 61
Gaudin, Albert ……… 241
Giveans, Marc Russell ……… 63, 143
Glowczewski, Barbara ……… 65
Goncharenko, Igor ……… 145
Grunwald, Arthur J. ……… 147
Guiard, Yves ……… 69
Gutiérrez, Teresa ……… 241
Guyet, Thomas ……… 71

H
Hanneton, Sylvain ……… 73
Hascoët, Mountaz ……… 241
Hasuike, Kimitake ……… 149
Hirai, Hiroaki ……… 151
Houot, Maxime ……… 153
Hughes, Stephen ……… 155
Hulin, Thomas ……… 157

I
Ijsselsteijn, Wijnand ……… 25
Issartel, Johann ……… 33

J
Jacquier-Bret, Julien ……… 159
Jacucci, Giulio ……… 161
Jean, Julie ……… 163
Jirsa, Viktor K. ……… 42
Jungmann, Manuela ……… 165

K
Khatchatourov, Armen ……… 45

L
Lagarde, Julien ……… 40
Lazzari, Stefano ……… 167
Lemoine, Loïc ……… 169
Lenay, Charles ……… 37
Lobjois, Régis ……… 171
Luciani, Annie ……… 241

M
Magnusson, Charlotte ……… 75, 241
Mantel, Bruno ……… 173, 241
Mars, Franck ……… 175
Mazzarino, Barbara ……… 177
McGann, Marek ……… 179
Midol, Nancy ……… 77
Milleville-Pennel, Isabelle ……… 181
Montagnini, Anna ……… 183
Morice, Antoine ……… 185

N
Navarro, Jordan ……… 187

O
Olivier, Gérard ……… 189, 191
Ollagnier-Beldame, Magali ……… 193
Oullier, Olivier ……… 31

P
Pacini, Adele ……… 195
Papagiannakis, George ……… 26, 197
Parenthoën, Marc ……… 199, 241
Pasquinelli, Elena ……… 201
Peinado, Manuel ……… 203
Pelachaud, Catherine ……… 205
Pepping, Gert-Jan ……… 207
Pissaloux, Edwige E. ……… 209
Pokluda, Luděk ……… 79
Preusche, Carsten ……… 241

Q
Quinton, Jean-Charles ……… 213

R
Raymaekers, Chris ……… 81
Richardson, Michael J. ……… 35
Romaiguère, Patricia ……… 215
Rousseaux, Francis ……… 83
Ruffaldi, Emanuele ……… 217

S
Sallnäs, Eva-Lotta ……… 85
San Martin, Jose ……… 221
Sani, Elisabetta ……… 87
Sarlegna, Fabrice ……… 223
Schiller, Gretchen ……… 225
Schmidt, Richard ……… 30, 89
Shiose, Takayuki ……… 91
Smith, Joanne ……… 227
Solazzi, Massimiliano ……… 229
Stoffregen, Thomas A. ……… 27

T
Tecchia, Franco ……… 241

V
Varni, Giovanna ……… 93
Varoqui, Deborah ……… 231, 241
Villard, Sebastien ……… 233

W
Wade, Michael ……… 235
Warren, William H. ……… 15

Y
Yokokohji, Yasuyoshi ……… 17

Z
Zaal, Frank ……… 237
Ziat, Mounia ……… 95, 239

Keyword Index

3D environment, 103
3D interaction, 85

A
Aborigines, 65
Acoustic, 173
Acoustic information, 91
Acoustic space, 93
Acting, 149
Action, 61, 171, 179
Action vs. judging, 237
Activity, 149
Adaptive action, 195
Adaptive behaviour, 59
Affordance, 141, 207, 227, 239
Agents, 71
Aging, 131, 171
Amplitude, 143
Anthropology, 65
Anticipation, 213
Application, 75
ARIMA, 163
Artificial intelligence, 53
Asynchrony, 42
Attention, 61, 183
Audio, 75
Audio feedback, 133
Audio referents, 93
Auditory feedback, 93
Auditory interface, 91
Auditory stimulation, 187
Augmented reality, 111
Augmenting human, 135
Automobile operation, 235
Autonomisation, 199
Autopoiesis, 165
Awareness cues, 161

B
Balance, 51
Balance task, 93
Behavioural interaction, 167
Believability, 177, 197, 201
Biofeedback, 129, 231
Biomechanics, 105
Biometric, 113
Blind people, 91
Bodily space location, 191
Body space enaction, 225
Bouncing-ball, 185
Buildings, 79

C
Catching, 237
CAVE, 237
Choreomediating, 225
Cognitive control, 61
Cognitive travel assistance, 209
Cognitive walking map, 209
Collaborative learning, 85
Collaborative work, 85
Collection, 83
Collectivity, 161
Collision avoidance, 235
Collision detection, 203
Colocation, 217
Color visual code, 111
Compatibility effect, 119
Complex system, 195, 199
Complexity, 53
Computer, 155
Contact location, 229
Continuous audio feedback, 123
Control, 159
Coordination, 40, 131, 137
Coupling, 53, 233
Curve driving, 121

D
Data interpretation, 71
Deformable objects, 57
Delay effects, 157
Design criteria, 221
Diabetic, 129
Direct perception, 37
Directional, 40
Discontinuity, 169
Discrete-time systems, 157
Discrimination threshold, 107
Displacement perception, 181
Distance perception, 103, 173
Distributed cognition, 193
Distributed systems, 81
Drag-and-drop, 115
Drawings, 65
Driver behaviour, 121
Driver models, 175
Driver-vehicle systems, 147
Driving, 175
Driving assistances, 187
Driving task, 181
DWAF method, 169
Dyad, 63
Dynamic coordination, 151
Dynamic systems theory, 139
Dynamical approach, 30, 89, 137
Dynamical constraints, 89
Dynamical systems, 233
Dynamical timer, 169
Dynamics, 40, 125, 163

E
Ecological, 195
Ecological perception, 239
Ecological psychology, 91
Eigenvector, 49
Electronic document, 69
Embodied agent, 205
Embodied cognition, 59, 189, 195
Emotion, 205
Enaction, 37, 53, 77, 189, 193, 199, 239
Enactive, 63, 143
Enactive interaction, 30
Enactive interface, 101, 123, 147
Enactive perception, 195
Encountering haptic interface, 229
End-to-end latency, 185
Epistemology, 77
Ergonomics, 175
Event-based timer, 169
Exocentric direction, 103
Expectations, 201
Experience, 193
Experimental analysis, 157
Experimental methods, 99
Experimental study, 109
Exploratory, 141
Exploratory movements, 183
Expression, 133, 155
Expressive cues, 177
Eye movement, 143, 167

F
Face validity, 185
Feedback, 237
Feedback device, 129
Figural collection, 83
Fingertip device, 229
Fitts' law, 123
Flexible object, 145
Fluidity of motion, 177
fMRI, 42, 215
Foot unloading, 129
Force feedback, 81
Force fields, 81
Forward internal model, 127
Fractal processes, 125
Frame of reference, 103, 223
Full-body control, 51
Future, 149

G
Game, 217
Game design, 99
Gameplay, 99
Goals, 179
Graphics, 69
Grasp movement, 105
Grip configurations, 227
Grip force, 131

H
Handedness, 73
Haptic, 57, 75, 79, 81, 145, 217, 221
Haptic interaction, 229
Haptic interface, 85
Haptic perception, 95, 153
Haptic stimulation, 187
Haptic systems, 157
Hardware, 155
HCI, 123
Head movement, 237
Hemiplegia, 231
HMI, 71
Human, 131
Human action and perception, 223
Human computer interface, 207
Human factors, 235
Human locomotion, 127
Human movement study, 89
Human studies, 95
Human-computer interface, 193
Human-technology interactions, 59

I
Ideomotor views, 61
Immersive, 217
Inertia, 173
Information capacity, 111
Information sampling, 59
Information visualization, 239
Innovation, 87
Integration, 42
Integrative physiology, 135
Intention, 179
Interaction, 37, 44, 121, 149, 197, 203
Interaction style, 109
Interaction technique, 115
Interactive art, 165, 225
Interface, 69, 111, 133
Intermodal, 173
Internal models, 131
Interpersonal coordination, 30, 89
Inverse kinematic, 177, 203

J
Jacobian matrix, 221
Juggling robot, 151

K
Kinaesthesia, 49
Kinaesthetic, 225
Kinematics, 227
Kinesfield, 225
Kinesthetic, 155
Kinesthetic cues, 147
Knowledge, 77, 201

L
Lane departure warning, 187
Laterality, 73
Line drawing, 75
Locomotion, 141
Locomotor respiratory coupling, 233

M
Manipulability, 221
Manipulability ellipsoid, 221
Manipulators, 147
Manual skill, 139
Market, 87
Mathematical theory, 135
MEG, 215
Mental imagery, 189
Mental rehearsal, 163
Mental simulation, 119
Mimicking, 205
Mixed realities, 197
Mobile group media, 161
Mobile phone, 111
Modeling, 125, 135
Monitoring, 113
Morphology, 99
Motion capture, 203
Motor behavior, 30
Motor cognition, 191
Motor control of perception, 189, 191
Motor imagery, 127
Motor images, 163
Motor learning, 137, 145
Motor planning, 223
Motor priming, 187
Motor skills, 95
Movement, 141
Movement illusion, 215
MRI, 117
Multi-joint arm, 49
Multimodal, 40
Multimodal feedback, 123
Multimodal interfaces, 87
Multiple displays, 115
Multisensory, 42
Multisensory integration, 223
Multisensory perception, 153
Muscle vibration, 127

N
Navigation, 79
Network, 113
Neurosciences, 77
Noise, 40
Nonverbal behavior, 205
Numerosities, 153

O
Object command transition, 109
Object oriented, 53
Obstacle avoidance, 105
Obstacles, 159
Oculomotor system, 183
Ontology, 83
Optic, 173
Optimal control, 145
Orientation difference threshold, 229

P
Pacinian, 117
Paintings, 65
Passive control, 151
Perception, 61, 133, 143, 145, 165, 171, 207
Perception and control, 49
Perception prothetized, 101
Perceptive strategies, 101
Perceptual constraints, 89
Perceptual control of movement, 139, 191, 227
Perceptual crossing, 37
Perceptual-motor development, 139
Perspective, 69
Phantom, 79, 85
Phenomenological ontology, 189
Phenomenology, 199
Philosophy, 179
Piaget, 83, 213
Pie menus, 109
Plantar pressure feedback, 129
Pointing, 73
Postural activity, 63
Postural deficits, 231
Postural dynamics, 231
Posture, 93, 137, 143
Predictability, 44
Prehension, 227
Pre-motor theory of attention, 119
Presence, 197
Proprioception, 93, 127, 215
Prosthesis, 73
Prototype, 149
Psychology, 179
Psychophysics, 185, 223
Psychophysics of multisensory-motor interaction, 207

Q
Qualitative analysis of gestures, 177
Qualitative approach, 153

R
Rational analysis, 59
Reach movement, 105
Reaching, 51, 119
Reaching movement, 49
Reach-to-grasp, 159
Reaction time, 207
Real-time feedback, 137
Recognition of intentionality, 37
Rehabilitation, 87, 231
Rhythm, 213
Rigidity, 63
Risk perception, 181
Ritual, 65
Road departure, 181
Road-crossing, 171

S
Saccades, 167
Scheme, 213
Scripting, 81
See-through interface, 109
Segregation, 42
Self-organization, 151
Sensor, 155
Sensorimotor integration, 183
Sensory substitution, 73
Sensory substitution device, 95
Sensory-motor coordination, 151
Simulation, 125, 171, 235
Sine circle map, 233
Singularization, 44
Situated cognition, 193
Smart phone application, 161
Smooth pursuit, 167
Social interaction, 165
Sound, 133
Space dual representation, 209
Spatial constraints, 159
Spatial forces distribution, 57
Spatio-temporal coordination, 105
Spectators, 161
Stability, 157
Steering, 121
Steering control, 175
Structural coupling, 71
Support surface, 63
Suspension of disbelief, 201
Synchronization, 30, 213, 233
Synchronous force feedback, 57
System implementation, 83

T
Tactile, 57
Tactile array, 107
Tactile perception, 107
Tactile sensitivity, 117
Tao philosophy, 77
Task constraints, 139
Taxonomy, 99
Technical system design, 44
Tendon vibration, 215
Time series data, 71
Timers, 169
Timing control, 125
Touch, 117
Touch screen, 115
Touch simulating interface, 209
Transition, 167
Trend, 87
Two-thirds power law, 95

U
Uncontrolled manifold, 159
User, 69
User behaviour, 44

V
Variability, 163
Vehicular control, 147
Vibrotactile, 107
Vibrotactile threshold, 117
Virtual character, 177
Virtual environment, 91, 135, 185
Virtual humans, 197
Virtual mannequin, 51
Virtual prototyping, 51
Virtual reality, 153, 199, 203
Virtual space, 103
Virtual surface, 107
Vision, 175, 183
Visual impairment, 75
Visual information, 121
Visual perception, 181, 205, 235
Visual research, 119
Visually impaired, 79
Visuomotor priming, 191
VR, 201

W
Walking, 209
Wall size display, 115
Wearable, 113
Web, 217
Wheelchair, 141
Whole body movement, 165
Wireless, 113

Z
Zoomable user interfaces, 239