Saccadic eye movements do not trigger a joint Simon effect
Talk: 2020/03/23, 08:30 AM - 10:00 AM (UTC)
Although the joint Simon effect has been investigated for more than a decade, its cause is still widely debated. Ideomotor views of action control commonly cite action effects as an explanation. However, action effects are usually confounded with the actions producing them. We combined a joint Simon task (JST) with eye tracking and asked participants to respond by performing specific saccades. Saccades were followed by visual feedback (central vs. lateral), serving as the action effect. This arrangement allowed us to isolate actions from action effects and also prevented each actor from seeing the reciprocal actions of the other. In this saccadic JST, we found a significant compatibility effect in the individual setting. The typical enhancement of the compatibility effect in the joint setting was absent with central action feedback, and even when lateralized visual action feedback was provided. Our findings suggest that the perception of action effects alone might not be sufficient to modulate compatibility effects for eye movements. The presence of a compatibility effect in the individual setting reflects the specific requirements of a saccadic compatibility task: performing prosaccades to compatible and antisaccades to incompatible target locations. The lack of a difference between compatibility effects in the joint and individual settings, together with the lack of a modulation of the compatibility effect by lateralized visual action feedback, shows that the joint Simon effect frequently reported for manual responses is absent for saccadic responses.
That Will Catch My Attention! Is the salience of an action's future effect anticipated and proactively monitored?
Talk: 2020/03/23, 08:30 AM - 10:00 AM (UTC)
When our actions contingently yield distal consequences, we bi-directionally associate action and effect. These action-effect associations allow us to select and plan our actions by anticipating their consequences. Crucially, effect anticipation also leads to anticipatory saccades towards the future location at which, based on prior learning experiences, we expect an action's effect to occur. These anticipatory saccades are thought to reflect a proactive monitoring process that prepares a later comparison of expected and actual effect. Here, we examined how features of the anticipated future effect are proactively monitored, using effect salience (luminance, colour contrast, and flicker frequency) as an example. Participants performed anticipatory saccades earlier for anticipated salient as compared to anticipated non-salient future effects of their actions. To draw inferences regarding the relation between the perception and anticipation of the same effect stimulus, we relate our findings on anticipatory saccades in anticipation of a future effect to exogenous and endogenous (antisaccade task) shifts of attention when the same stimulus is presented.
Perceived Contingency Affects ERP Amplitude Reductions for Self-Generated Sounds
Talk: 2020/03/23, 08:30 AM - 10:00 AM (UTC)
The processing of self-initiated stimuli has been associated with attenuated sensory intensity and neural activity. In the auditory domain, the ERP components N1 and P2 are reduced for self- compared to externally-generated sounds. One interpretation suggests that the attenuation of both components reflects a matching of action-based sensory predictions and sensory reafferences, which helps to identify stimuli as self-generated and thus contributes to the sense of agency. Agency has been found to vary as a function of the perceived contingency between actions and action outcomes. To explore the effect of perceived contingency on the N1 and P2 for self-generated tones, participants first engaged in a paradigm in which they tried to elicit a desired tone with button presses. The perceived contingency was manipulated by changing the outcome probability. Following high- and low-outcome-probability versions of this paradigm, participants performed a task in which ERPs were recorded for self- and externally-generated sounds. A linear mixed-effects analysis including individual control ratings as predictors revealed that N1 amplitudes were unaffected by the contingency manipulation, whereas the P2 amplitude for self-generated sounds was significantly lower in the high- compared to the low-probability condition, but only for participants with large differences in their control ratings. This study adds further evidence to reports of a dissociation between the N1 and P2 components, with the P2 reduction reflecting top-down processes such as perceived control, and the N1 reduction reflecting bottom-up processes that depend on action-related information.
Spatial action-effect binding depends on type of action-effect transformation
Talk: 2020/03/23, 08:30 AM - 10:00 AM (UTC)
Spatial action-effect binding denotes the mutual attraction between the perceived position of an effector (e.g., one's own hand) and a distal object that is controlled by this effector. Such spatial binding can be construed as an implicit measure of the inclusion of the controlled object into the agent's body representation. In two experiments, we investigated how different transformations of hand movements into movements of a visual object affect spatial action-effect binding. In Experiment 1, we found a significantly lower drift of the proprioceptive position of the hand towards the visual object when hand movements were transformed into inverted cursor movements rather than cursor movements in the same direction, while the actual physical distance between hand and object was held constant. Experiment 2 showed that this reduction reflected a complete elimination of spatial binding in the inverted condition. The results will be discussed in light of the idea that conflicting sensory inputs lead to the suppression of those input channels that are less relevant for the current task. Furthermore, they broaden our understanding of the prerequisites for an experience of ownership over artificial, non-corporeal objects: direct control over how an object moves is not a sufficient condition for a sense of ownership, because proprioceptive drift can be fully abolished even under conditions of full controllability.