Adaptation aftereffects reveal representations for encoding of contingent social actions
Talk: 10:30 AM - 12:00 Noon (UTC), 2020/03/23
Why is it so easy for humans to interact with each other? In social interactions, humans coordinate their actions with each other nonverbally. For example, dance partners need to relate their actions to each other to coordinate their movements. The underlying neurocognitive mechanisms supporting this ability are surprisingly poorly understood. Here we use a behavioral adaptation paradigm to examine the functional properties of neural processes encoding social interactions. We show that neural processes exist that are sensitive to pairs of matching actions that make up a social interaction. Specifically, we demonstrate that social interaction encoding processes exhibit sensitivity to a primary action (e.g., "throwing") and, importantly, to a matching contingent action (e.g., "catching"). Control experiments demonstrate that the sensitivity of action recognition processes to contingent actions cannot be explained by lower-level visual features or amodal semantic adaptation. Moreover, we show that action recognition processes are sensitive to contingent but not to noncontingent actions, demonstrating their selective tuning to action contingencies. Together, these findings reveal a selective coding mechanism for action contingencies in action-sensitive processes and demonstrate how the representations of the individual actions in a social interaction can be linked into a unified representation. They provide insights into the perceptual architecture that helps humans relate actions to each other and contrast with the common view that action-sensitive units are sensitive to one action only.
The cognitive architecture of action representations
Talk: 10:30 AM - 12:00 Noon (UTC), 2020/03/23
Understanding other people’s actions is essential for our survival: we need to be able to process others’ goals and to behave accordingly. Although for most of us this ability comes with effortless ease, the underlying processes are far from understood. In the current study, we aimed to examine (a) the representational space of actions in terms of features and categories, (b) the dimensions that underlie this structure, and (c) what makes some actions more similar to others. To address these questions, we used three rating studies as well as inverse multidimensional scaling in combination with hierarchical clustering. We found that the structure of actions can be divided into twelve general categories, for instance sport, daily routines, or food-related actions. Moreover, we found that the feature-based structure underlying action representations can be mapped onto eleven dimensions, such as involved body parts, change of location, and contact with others. Additionally, we observed that the categorical structure of actions was best explained by four of the eleven dimensions: object-directedness, posture, pace, and use of force. Results from these studies reveal possible categorical and feature-based structures underlying action representation and additionally show which information is important for distinguishing between different actions and assigning meaning to them.
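For readers who want to see how the clustering step of such an analysis might look in practice, a minimal sketch is given below. It assumes a pairwise dissimilarity matrix of the kind an inverse-MDS arrangement task yields; the random data, variable names, and parameter choices (average linkage, twelve clusters) are illustrative placeholders, not the study's actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): cluster actions from a
# pairwise dissimilarity matrix and embed them with metric MDS.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Hypothetical stand-in for behavioral dissimilarities between 20 actions
# (e.g., derived from an inverse-MDS arrangement task).
n_actions = 20
d = rng.random((n_actions, n_actions))
dissim = (d + d.T) / 2          # symmetrize
np.fill_diagonal(dissim, 0.0)   # zero self-dissimilarity

# Hierarchical clustering on the condensed distance vector.
Z = linkage(squareform(dissim), method="average")
labels = fcluster(Z, t=12, criterion="maxclust")  # e.g., twelve categories

# Low-dimensional embedding of the same dissimilarities.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissim)

print(labels)
print(embedding.shape)  # (20, 2)
```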
Talk: 10:30 AM - 12:00 Noon (UTC), 2020/03/23
Actions can be described at different hierarchical levels, ranging from very broad (e.g., doing sports) to very specific ones (e.g., breaststroke). Here we aimed to determine the characteristics of actions at different levels of abstraction. Following up on the literature on object categorization (e.g., Rosch et al., 1976), we carried out several behavioral studies in which we presented action words at the superordinate (e.g., locomotion), basic (e.g., swimming), and subordinate level (e.g., breaststroke). We instructed participants to list features of actions (e.g., ‘arm’, ‘rotating’, ‘water’, ‘diving’ for the action ‘swimming’) and measured the number of features that were provided by at least six out of twenty participants (‘common features’), separately for the three levels. Specifically, we determined the number of shared features (i.e., provided for more than one category) and the number of unique features (i.e., provided for one category only). We found that participants produced the highest number of common features for actions at the basic level. Participants described actions at the superordinate level with more unique features than actions at the basic level. At the same time, actions at the subordinate level shared most features with actions from different categories at the same (subordinate) level. Our results suggest that the basic level, at which information about action categories is maximized, plays a central role in categorization, in line with the corresponding literature on object categorization.
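To make the counting criterion concrete, the sketch below shows one way to compute ‘common’ features (those listed by at least six of twenty participants) and to split them into shared versus unique features. The actions, feature lists, and threshold variable are hypothetical examples, not the study's materials or code.

```python
# Illustrative sketch (not the authors' code): count "common" features
# (listed by at least 6 of 20 participants) per action, then split them
# into features shared across actions vs. unique to one action.
from collections import Counter

# Hypothetical feature listings: action -> one feature list per participant.
listings = {
    "swimming":     [["arm", "water", "diving"], ["water", "rotating"]] * 10,
    "breaststroke": [["arm", "water", "frog kick"], ["water", "arm"]] * 10,
}

MIN_PARTICIPANTS = 6

common = {}
for action, per_participant in listings.items():
    # Count how many participants listed each feature (set() deduplicates
    # features within a single participant's list).
    counts = Counter(f for feats in per_participant for f in set(feats))
    common[action] = {f for f, n in counts.items() if n >= MIN_PARTICIPANTS}

for action, feats in common.items():
    others = set().union(*(v for a, v in common.items() if a != action))
    shared = feats & others          # also common for another action
    unique = feats - others          # common for this action only
    print(action, "common:", len(feats),
          "shared:", len(shared), "unique:", len(unique))
```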
The predictive impact of contextual cues during action observation
Talk: 10:30 AM - 12:00 Noon (UTC), 2020/03/23
In everyday life, we frequently observe other people acting and easily infer their goals. A large body of research has focused on the question of how we capture, simply by observation, what others aim to do. Unsurprisingly, action recognition relies strongly on the analysis of the actor’s body movements and the recognition of manipulated objects. However, an increasing number of studies show that action observers also exploit contextual information, such as the environment in which the action takes place, actor-related cues, and unused objects near the action, i.e., contextual objects (COs). With regard to the latter, we tested the assumption that the brain's engagement in processing COs is driven not merely by a CO’s semantic congruency with the observed action, but rather by its potential to inform expectations about upcoming action steps. Based on previous findings, our neuroanatomical hypotheses focused in particular on the inferior frontal gyrus (IFG). Our results revealed that the IFG reflects the informational impact of COs on an observed action under two circumstances: either when the CO strongly matched the action, so that the currently operating predictive model of the observed action could be updated and specified toward a particular outcome, or when the CO strongly conflicted with the observed manipulation, in which case the currently operating predictive model had to be reconsidered and possibly extended toward a new overarching action goal. Our findings support the view that when observing an action, the brain is particularly tuned to highly informative context.
Decoding the meaning of actions across vision and language
Talk: 10:30 AM - 12:00 Noon (UTC), 2020/03/23
How is knowledge about actions like opening, giving, or greeting represented in the brain? A key aspect of this question concerns the neural distinction between the representation of specific perceptual details (e.g., the movement of a body part) and more general, conceptual aspects (e.g., that a movement is meant to give something to someone). Critically, the former representation is tied to a specific modality, whereas the latter is generally accessible via different modalities, e.g., via observation or language. A popular view is that perceptual action details are encoded in occipital and temporal cortex, whereas conceptual aspects are encoded in frontoparietal cortex, potentially overlapping with the motor system. Using fMRI-based crossmodal MVPA, we provide evidence that favors an alternative view: action representations in left lateral posterior temporal cortex (LPTC) generalize across action videos and sentences, indicating that they can be accessed both via observation and via language. Moreover, multiple regression RSA demonstrated that these modality-general representations are organized along semantic principles, which further corroborates the interpretation that LPTC represents actions at a conceptual level. By contrast, frontoparietal areas revealed functionally distinct representations for the two modalities, suggesting that they represent modality-specific details and, more generally, challenging the widely held assumption that overlap in brain activity indicates the recruitment of a common representation or function.
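For illustration only, a minimal sketch of the crossmodal decoding logic is shown below: a classifier trained on patterns evoked by action videos is tested on patterns evoked by sentences, and vice versa, with the two directions averaged. The ROI data, labels, and classifier choice are hypothetical placeholders (the data here are random, so accuracy will sit at chance), and the multiple regression RSA step is not shown.

```python
# Illustrative sketch (not the authors' pipeline): crossmodal MVPA in which a
# classifier is trained on one modality (videos) and tested on the other
# (sentences), then the direction is reversed and accuracies are averaged.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_trials, n_voxels, n_actions = 80, 200, 4               # hypothetical ROI data
y = rng.integers(0, n_actions, size=n_trials)             # action labels
X_video = rng.standard_normal((n_trials, n_voxels))       # video-evoked patterns
X_sentence = rng.standard_normal((n_trials, n_voxels))    # sentence-evoked patterns

def crossmodal_accuracy(X_train, y_train, X_test, y_test):
    """Train on one modality, test on the other."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)

acc = np.mean([
    crossmodal_accuracy(X_video, y, X_sentence, y),
    crossmodal_accuracy(X_sentence, y, X_video, y),
])
print(f"crossmodal decoding accuracy: {acc:.2f} (chance = {1 / n_actions:.2f})")
```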