Michael C. Corballis
University of Auckland
It is widely believed that language emerged in our species within the last 100,000 years. This “big bang” theory of language evolution makes little evolutionary sense. I argue instead that language was built on the primate mirror system, a brain circuit specialized initially for manual grasping. In bipedal hominins, it expanded into pantomime, with progressive movement from hand to face, and ultimately to vocal gestures. Evidence for this scenario comes from gestural communication in primates, language-like gestures in apes, overlapping lateralized systems for speech and gesture in humans, and sign languages. The cultural big bang of the past 100,000 years may stem from the emergence of speech as the dominant mode, and not from language itself.
Jana M. Iverson
University of Pittsburgh
This talk presents findings from three lines of research designed to address the broader issue of the role of gesture in communicative development. The first focuses on gesture as a predictor of developing language in typically- and atypically-developing toddlers. The second explores the use of gestures by parents when interacting with typically- vs. atypically-developing toddlers (“gestural motherese”) and asks whether parent gesture is modified in relation to the child’s developmental level. The third documents enhanced variability in early gesture use in infants at heightened biological risk for autism spectrum disorder and explores the consequences of this variability for the language-learning environment.
Hand gestures are a ubiquitous part of human communication. In this talk, Dr. Kelly will discuss the role that these gestures play with speech in the process of language understanding. Specifically, he will draw from multiple fields — developmental psychology, cognitive neuroscience, and second language (L2) learning — using both behavioral and brain approaches to show that gesture and speech are integrated to differing extents at different levels of language. In so doing, he hopes to ultimately provide insights into our understanding of what language is. He will conclude by discussing the possibility that the human body is not just a communicator of language, but potentially also a fundamental part of it.
Gesture research has a long tradition grounded in empirical and theoretical investigation. The systematic use of computational modeling to understand a behavioral or cognitive phenomenon, however, as spearheaded by Artificial Intelligence and Cognitive Science, has not been applied to the study of co-occurring speech and gesture and its underlying cognitive mechanisms. I will argue that computational methods do in fact offer attractive additional means of investigating gesture: By devising cognitive models and implementing simulations of speech and gesture behavior, theoretical models are put to the test and made to yield detailed, verifiable predictions. By grounding these models in empirical data, new tools for annotation and analysis are created that answer detailed questions about the data. Finally, by employing simulators for controlled experiments, the effects and constituents of multimodal behavior can be studied when transferred to non-human communicators. I will present work along all of these lines. The results I will report demonstrate how computational gesture studies are beginning to unfold their potential for gesture researchers, by providing them with new kinds of questions, tools, and answers.
University of Basel
This talk focuses on a phenomenon widely studied in the gesture studies literature: deixis. The aim of the paper is to take deixis as a paradigmatic case of a phenomenon that is achieved not only by mobilizing language and gesture, but which involves the entire body, within complex multimodal Gestalts that are progressively and emergently built in social interaction. Deixis will be taken as a starting point for investigating the way in which complex multi-layered dimensions intervene in the achievement of reference as it is naturally organized by participants in their routine practice. In order to refer to objects in the environment, participants in interaction mobilize talk, gesture, gaze, and body posture, as well as their entire mobile bodies. The paper deals with the temporal coordination of these embodied dimensions, within the action of the speaker as well as, more globally, within social interaction. It shows that the observation of naturally occurring social activities can reveal systematic sequential practices involving the emergent composition of multimodal Gestalts. Empirical examples will be drawn from corpora of video-recorded interactions in a diversity of social settings, in ordinary conversations as well as in workplace interactions.
University of Haifa
The relation between sign language and gesture is not clearly understood. Some take the view that sign languages are just like spoken languages, distinguished only trivially by the medium of production, while others hold that sign languages are derived directly from gesture, although different from co-speech gesture in crucial ways. The work presented here suggests that we are confronted with this puzzling dichotomy because we have often been looking in the wrong places in our quest to understand the relation between sign language and gesture.
I begin by isolating gestures of different parts of the body that are designated to manifest grammatical structure in established sign languages. Turning to a very young sign language in a Bedouin village, I will show that the body begins as a nondesignated whole, with only the hands designated to create images. Across four age groups, we will see, not a magical and sudden appearance of grammatical structure, but instead a gradual activation of different parts of the body, to create increasingly complex grammatical form. Through this process, the designated gestures of visual language illuminate the emergence of grammar in a way that could not be observed in a newly emerging spoken language, even if it were possible to encounter one.
A number of photo albums were added to the new photos page. Thanks to the contributors!
The ISGS 5 conference is now over and we would like to thank everyone involved for their contributions! If you have photos from the event that you would like us to put up on the website, please send them to us. If you presented a poster at the conference and want us to put it up on the website, please send that too. The posters will be put up on a password-protected page — to avoid concerns about not-yet-published results — and the password will be sent out to all conference participants.
On-site registration opens at 8 AM tomorrow (Tuesday).
The welcome reception will NOT take place in the university building, as previously stated. See the Social events page for details.
Follow ISGS 5 on Twitter.
Online registration will close on Sunday the 22nd of July. After that, all registration is handled at the conference venue.
The Book of Abstracts is now available! Information on the social events has been added on a separate page. The Conference Schedule is updated (very slightly). More exact information about the location of the conference venue has been added to the information page.
Version 3 of the Preliminary Conference Program is now available.
The Preliminary Conference Program is now available.
About posters: The boards that will be used for holding the posters at ISGS 5 are 90 centimeters wide and 190 centimeters high. An email with more information will be sent out to all poster presenters.
Notifications of acceptance or rejection of submitted abstracts have been sent out by email. It is now possible to register for the conference.
The deadline has now passed and the reviewing process has started.
Deadline for submission of abstracts and panel proposals is extended to February 13!
It is now possible to submit abstracts and panel proposals.
Scientific committee added.
Call for papers sent out and web site launched.