Keynotes

Neurocognitive Mechanisms of Contextual Adjustments in Cognitive Control

Tobias Egner

Department of Psychology & Neuroscience, and Center for Cognitive Neuroscience, Duke University

When routine behavior runs into trouble, "cognitive control" processes are recruited to bring information processing in line with current demands. For instance, encountering a near-accident on our commute will shift our attentional focus toward the traffic and away from the radio. How does the brain accomplish this? In this talk, I will present behavioral, neuroimaging, and neuro-stimulation data that delineate the cognitive and neural mechanisms underlying our ability to adapt to changing task demands. Specifically, I will present a "control learning" perspective that views cognitive control as being guided by learning and memory mechanisms, exploiting statistical regularities in our environment to anticipate the need for control. Control learning not only adapts attentional sets to changing demands over time, but can also directly associate appropriate top-down attentional sets with specific bottom-up cues. This type of learning holds the promise of combining the speed of automatic processing with the flexibility of controlled processing, and could form the basis of novel interventions in clinical conditions that involve impaired cognitive control.


Social signalling as a framework for understanding human non-verbal behaviour

Antonia Hamilton

Institute of Cognitive Neuroscience, University College London

Face-to-face social interactions between two people involve a rich exchange of verbal and non-verbal signals, but the cognitive and neural mechanisms supporting dynamic interactions remain poorly understood. This talk will use a social signalling framework to make sense of one particularly social behaviour – imitation – which has been described as a 'social glue' that causes affiliation and liking. However, it is not clear what cognitive and brain mechanisms could link imitation to affiliation. By placing the 'social glue' hypothesis within a signalling framework, it is possible to make specific testable predictions for how and why we imitate. First, to act as social glue, imitation should be produced when another person is watching and can receive the imitation signal. Second, the person watching should change their evaluation of the imitator. I will describe a series of studies which test the first of these predictions in detail, using behavioural and neuroimaging methods with infants, children, typical adults and adults with autism spectrum condition. The results converge in showing that being watched increases the tendency to imitate, and support the interpretation of imitation as a signalling behaviour.

Building on this, the second part of this talk describes the new methods available to explore social signalling behaviour in live interactions. Using detailed motion capture together with wavelet analysis, we can track and quantify precise patterns of natural mimicry behaviour and other social cues in two-person conversation. Using functional near-infrared spectroscopy, we can record neural signatures of imitating and being imitated while freely moving participants are engaged in naturalistic tasks. These new approaches can give deeper insights into the details of social behaviour and allow us to define the neural mechanisms of dynamic social interactions. Applying these methods and interpreting them within the context of a social signalling framework shows how we can turn the idea of 'second person neuroscience' into a concrete reality.


Ecological Language: a multimodal approach to language learning and processing

Gabriella Vigliocco

Institute of Cognitive Neuroscience, University College London

The human brain has evolved the ability to support communication in complex and dynamic environments. In such environments, language is learnt and mostly used in face-to-face contexts, in which processing and learning are based on multiple cues, both linguistic and non-linguistic. Yet, our understanding of how language is learnt and processed comes for the most part from reductionist approaches in which the multimodal signal is reduced to speech or text. I will introduce our current programme of research that investigates language in real-world settings in which learning and processing are intertwined and the listener/learner has access to – and therefore can take advantage of – the multiple cues provided by the speaker. I will then describe studies that aim to characterise the distribution of multimodal cues in the language used by caregivers when interacting with their children (mostly 2-3 years old), and provide data on how these cues are differentially distributed depending upon whether the child knows the objects being talked about (allowing us to more clearly isolate learning episodes), and whether the objects are present (ostensive vs. non-ostensive). I will then move to a study using EEG that addresses the question of how discourse, but crucially also the non-linguistic cues, modulates predictions about the next word in a sentence. I will conclude by discussing the insights we have gained, and (especially) can gain, using this real-world, more ecologically valid approach to the study of language.
