
Cognitive and Neural Processes During Action Observation

Session Information

Mar 23, 2020, 10:30 AM – 12:00 noon (UTC)
Venue: SR 113
TeaP 2020, Jena, Germany (contact: teap2020@uni-jena.de)

Presentations

Adaptation aftereffects reveal representations for encoding of contingent social actions

Talk · 10:30 AM – 12:00 noon (UTC)
Why is it so easy for humans to interact with each other? In social interactions, humans coordinate their actions with each other nonverbally. For example, dance partners need to relate their actions to each other to coordinate their movements. The neurocognitive mechanisms supporting this ability are surprisingly poorly understood. Here we use a behavioral adaptation paradigm to examine the functional properties of neural processes encoding social interactions. We show that there are neural processes sensitive to pairs of matching actions that make up a social interaction. Specifically, we demonstrate that social interaction encoding processes are sensitive to a primary action (e.g., “throwing”) and, importantly, to its matching contingent action (e.g., “catching”). Control experiments demonstrate that this sensitivity to contingent actions cannot be explained by lower-level visual features or amodal semantic adaptation. Moreover, action recognition processes are sensitive to contingent but not to noncontingent actions, demonstrating their selectivity. These findings reveal a selective coding mechanism for action contingencies in action-sensitive processes and show how the representations of individual actions in a social interaction can be linked into a unified representation. They provide insights into the perceptual architecture that helps humans relate actions to each other, and they contrast with the common view that action-sensitive units are sensitive to only one action.
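The logic of such an adaptation paradigm can be made concrete with a small numerical sketch. The Python snippet below is purely illustrative and not taken from the study: variable names, trial counts, and response data are all invented. It quantifies an aftereffect as the shift in how an ambiguous test action is categorized after adapting to “throwing” versus “catching”.

```python
# Hypothetical sketch: quantifying an adaptation aftereffect as the shift in
# how an ambiguous test action is categorized after two different adaptors.
# Data shapes, trial counts, and response probabilities are invented.
import numpy as np

rng = np.random.default_rng(0)

# 1 = participant categorized the ambiguous test stimulus as "catching",
# 0 = categorized it as "throwing"; one entry per trial, per adaptor condition.
responses_after_throwing = rng.binomial(1, 0.65, size=40)  # fake data
responses_after_catching = rng.binomial(1, 0.35, size=40)  # fake data

p_throw_adapt = responses_after_throwing.mean()
p_catch_adapt = responses_after_catching.mean()

# A positive aftereffect index means perception was repelled away from the
# adaptor: adapting to "throwing" biases the ambiguous test toward "catching".
aftereffect = p_throw_adapt - p_catch_adapt
print(f"P('catching' | throw adaptor) = {p_throw_adapt:.2f}")
print(f"P('catching' | catch adaptor) = {p_catch_adapt:.2f}")
print(f"Aftereffect index = {aftereffect:.2f}")
```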
Presenters
SD
Stephan De La Rosa
FOM University of Applied Sciences

The cognitive architecture of action representations

Talk · 10:30 AM – 12:00 noon (UTC)
Understanding other people’s actions is essential for our survival: we need to be able to process others’ goals and to behave accordingly. Although for most of us this ability comes with effortless ease, the underlying processes are far from understood. In the current study, we aimed to examine (a) the representational space of actions in terms of features and categories, (b) the dimensions that underlie this structure, and (c) what makes some actions more similar to others. To address these questions, we used three rating studies as well as inverse multidimensional scaling in combination with hierarchical clustering. We found that the structure of actions can be divided into twelve general categories, for instance sport, daily routines, or food-related actions. Moreover, we found that the feature-based structure underlying action representations can be mapped onto eleven dimensions, such as involved body parts, change of location, and contact with others. Additionally, we observed that the categorical structure of actions was best explained by four of the eleven dimensions: object-directedness, posture, pace, and use of force. Results from these studies reveal the possible categorical and feature-based structures underlying action representations and show what information is important for distinguishing between different actions and for assigning meaning to them.
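As a rough illustration of the clustering step described above (not the authors’ actual pipeline), the following Python sketch applies hierarchical clustering to a pairwise action-dissimilarity matrix of the kind an inverse-MDS arrangement task might yield. The actions and dissimilarity values here are invented.

```python
# Illustrative sketch: recovering a categorical structure from pairwise
# action dissimilarities via hierarchical clustering. All data are fake.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

actions = ["running", "swimming", "cooking", "eating", "writing", "typing"]

rng = np.random.default_rng(1)
# Symmetric dissimilarity matrix with zero diagonal, standing in for the
# behaviorally measured dissimilarities between actions.
d = rng.uniform(0.2, 1.0, size=(len(actions), len(actions)))
dissim = np.triu(d, 1) + np.triu(d, 1).T

# Average-linkage clustering on the condensed distance vector.
Z = linkage(squareform(dissim), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")  # e.g., ask for 3 clusters

for action, lab in zip(actions, labels):
    print(f"{action}: cluster {lab}")
```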
Presenters
ZK
Zuzanna Kabulska
Universität Regensburg
Co-Authors
AL
Angelika Lingnau
Universität Regensburg

Hierarchical organization of actions

Talk · 10:30 AM – 12:00 noon (UTC)
Actions can be described at different hierarchical levels, ranging from very broad (e.g., doing sports) to very specific (e.g., breaststroke). Here we aimed to determine the characteristics of actions at different levels of abstraction. Following up on the literature on object classification (e.g., Rosch et al., 1976), we carried out several behavioral studies in which we presented action words at the superordinate (e.g., locomotion), basic (e.g., swimming), and subordinate level (e.g., breaststroke). We instructed participants to list features of actions (e.g., ‘arm’, ‘rotating’, ‘water’, ‘diving’ for the action ‘swimming’) and measured the number of features provided by at least six out of twenty participants (‘common features’), separately for the three levels. Specifically, we determined the number of shared features (i.e., provided for more than one category) and the number of unique features (i.e., provided for one category only). We found that participants produced the highest number of common features for actions at the basic level. Participants described actions at the superordinate level with more unique features than at the basic level. At the same time, actions at the subordinate level shared most features with actions from different categories at the same (subordinate) level. Our results suggest that the basic level, at which information about action categories is maximized, plays a central role in categorization, in line with the corresponding literature on object categorization.
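The feature-counting procedure can be sketched in a few lines of Python. Everything below is hypothetical (the feature sets are fabricated; only the threshold of at least six of twenty participants is taken from the abstract): it tallies common features per category and splits them into shared versus unique.

```python
# Hypothetical sketch of the feature-counting logic: a feature counts as
# "common" if at least 6 of 20 participants listed it for a category; common
# features are then split into shared (listed for >1 category) vs. unique.
from collections import Counter

MIN_PARTICIPANTS = 6  # threshold from the abstract: >= 6 of 20 participants

# features_by_category[category] = one feature set per participant (fake data).
features_by_category = {
    "swimming":     [{"arm", "water", "rotating"}, {"water", "diving"},
                     {"water", "arm"}, {"water"}, {"water", "legs"},
                     {"water", "arm"}],
    "breaststroke": [{"water", "arm", "frog kick"}, {"water", "frog kick"},
                     {"water"}, {"water", "frog kick"}, {"water", "frog kick"},
                     {"water", "frog kick"}],
}

# Common features: listed by at least MIN_PARTICIPANTS participants.
common = {}
for category, per_participant in features_by_category.items():
    counts = Counter(f for feats in per_participant for f in feats)
    common[category] = {f for f, n in counts.items() if n >= MIN_PARTICIPANTS}

# Shared vs. unique: does the common feature also appear in another category?
for category, feats in common.items():
    others = set().union(*(v for k, v in common.items() if k != category))
    shared = feats & others
    unique = feats - others
    print(f"{category}: {len(shared)} shared, {len(unique)} unique")
```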
Presenters
TZ
Tonghe Zhuang
Universität Regensburg
Co-Authors
AL
Angelika Lingnau
Universität Regensburg

The predictive impact of contextual cues during action observation

Talk · 10:30 AM – 12:00 noon (UTC)
In everyday life, we frequently observe other people acting and easily infer their goals. A large body of research has focused on the question of how we capture, simply by observation, what others aim to do. Hardly surprisingly, action recognition relies strongly on the analysis of the actor’s body movements and the recognition of manipulated objects. However, an increasing number of studies show that action observers also exploit contextual information, such as the environment in which the action takes place, actor-related cues, and unused objects near the action, i.e., contextual objects (COs). With regard to the latter, we tested the assumption that the brain’s engagement in processing COs is driven not just by the COs’ semantic congruency with the observed action, but rather by their potential to inform expectations about upcoming action steps. Based on previous findings, our neuroanatomical hypotheses focused on the inferior frontal gyrus (IFG). Our results revealed that the IFG reflects the informational impact of COs on an observed action under two circumstances: when the CO constituted a strong match, so that the currently operating predictive model of the observed action could be updated and specified toward a particular outcome; or when the CO strongly conflicted with the observed manipulation, in which case the currently operating predictive model had to be reconsidered and possibly extended toward a new overarching action goal. Our findings support the view that, when observing an action, the brain is particularly tuned to highly informative context.
Presenters
NE
Nadiya El-Sourani
University of Münster
Co-Authors
RS
Ricarda Schubotz
University of Münster

Decoding the meaning of actions across vision and language

Talk · 10:30 AM – 12:00 noon (UTC)
How is knowledge about actions like opening, giving, or greeting represented in the brain? A key aspect of this question concerns the neural distinction between the representation of specific perceptual details (e.g., the movement of a body part) and more general, conceptual aspects (e.g., that a movement is meant to give something to someone). Critically, the former representation is tied to a specific modality, whereas the latter is accessible via different modalities, e.g., via observation or language. A popular view is that perceptual action details are encoded in occipital and temporal cortex, whereas conceptual aspects are encoded in frontoparietal cortex, potentially overlapping with the motor system. Using fMRI-based crossmodal MVPA, we provide evidence favoring an alternative view: action representations in left lateral posterior temporal cortex (LPTC) generalize across action videos and sentences, indicating that they can be accessed both via observation and via language. Moreover, multiple regression RSA demonstrated that these modality-general representations are organized along semantic principles, further corroborating the interpretation that LPTC represents actions at a conceptual level. By contrast, frontoparietal areas revealed functionally distinct representations for the different modalities, suggesting that they represent modality-specific details and, more generally, challenging the widely held assumption that overlap in brain activity indicates the recruitment of a common representation or function.
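For readers unfamiliar with crossmodal MVPA, the following minimal Python sketch shows the core train-on-one-modality, test-on-the-other logic under assumed data shapes. It is not the authors’ code; the ROI patterns, dimensions, and classifier choice are all illustrative.

```python
# Minimal crossmodal MVPA sketch under assumed data shapes: train a classifier
# on fMRI patterns evoked by action videos, test it on patterns evoked by
# sentences, and average with the reverse direction. Above-chance accuracy
# would indicate a modality-general action code. All data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_actions, n_reps, n_voxels = 4, 10, 50

y = np.repeat(np.arange(n_actions), n_reps)        # action labels per trial
signal = rng.normal(size=(n_actions, n_voxels))    # shared action code
# Fake ROI patterns, shape (trials, voxels), one array per modality.
X_video = signal[y] + rng.normal(scale=2.0, size=(y.size, n_voxels))
X_sent  = signal[y] + rng.normal(scale=2.0, size=(y.size, n_voxels))

def crossmodal_accuracy(X_train, y_train, X_test, y_test):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return (clf.predict(X_test) == y_test).mean()

acc = np.mean([
    crossmodal_accuracy(X_video, y, X_sent, y),   # train video, test sentence
    crossmodal_accuracy(X_sent, y, X_video, y),   # train sentence, test video
])
print(f"Crossmodal decoding accuracy: {acc:.2f} (chance = {1/n_actions:.2f})")
```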
Presenters
MW
Moritz Wurm
CIMeC, University of Trento
Co-Authors
AC
Alfonso Caramazza
