
Computational modeling in cognitive psychology

Session Information

Mar 23, 2020, 04:00 PM - 05:30 PM (UTC)
Venue: HS 5
TeaP 2020, Jena, Germany (teap2020@uni-jena.de)

Presentations

How information integration costs shape strategy selection in decision making: A Bayesian multimethod approach

Talk 04:00 PM - 05:30 PM (UTC)
Decision makers have at their disposal both compensatory strategies, which integrate across all available attributes of an option, and noncompensatory strategies, which consider only a subset of the attributes. Compensatory and noncompensatory strategies make different predictions both for the resulting decisions and for patterns in response times. I present a Bayesian latent mixture approach that seamlessly combines decision and response-time data to infer a person’s strategy use. The approach also makes it possible to compare different assumptions about information processing in compensatory strategies (e.g., regarding how the amount of available evidence influences response errors and response times), taking into account the model complexity inherent in the assumptions. I apply the approach to examine the influence of the cognitive costs of integrating attribute information on strategy selection in decisions from givens (where all attribute information is openly provided). Participants were asked to decide between two alternatives, and both the number of attributes shown for each alternative and the way attribute information was coded were manipulated. The results show that participants predominantly selected a noncompensatory strategy when the number of attributes was high and the attribute coding scheme varied across attributes; otherwise, they mainly relied on a compensatory strategy. I suggest that this pattern of strategy selection reflects an adaptive response to the costs of information integration, a previously neglected factor in strategy selection. The findings suggest an explanation for a puzzling inconsistency in previous studies on strategy selection in decisions from givens; they also reveal boundary conditions of automatic compensatory processing in decision making.
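As a toy illustration of the latent-mixture logic, Bayes' rule can assign a participant to one of two strategies from their choices alone (the actual approach in the talk also incorporates response times; the choice probabilities below are made up for the example):

```python
import numpy as np

# Hypothetical predicted probabilities of choosing option A on four
# trials under each strategy (illustrative numbers, not from the talk)
p_compensatory    = np.array([0.9, 0.8, 0.6, 0.7])   # weighted-additive-like
p_noncompensatory = np.array([0.9, 0.9, 0.1, 0.9])   # take-the-best-like

choices = np.array([1, 1, 0, 1])  # observed choices (1 = option A)

def log_lik(p, y):
    """Bernoulli log-likelihood of the observed choices under one strategy."""
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Equal prior on the two latent strategies; posterior via Bayes' rule
logs = np.array([log_lik(p_compensatory, choices),
                 log_lik(p_noncompensatory, choices)])
post = np.exp(logs - logs.max())
post /= post.sum()
print(dict(zip(["compensatory", "noncompensatory"], post.round(3))))
```

Here trial 3 (where the two strategies disagree) carries most of the evidence; the full latent-mixture model does the same computation hierarchically and jointly over choices and response times.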
Presenters
Thorsten Pachur
Max Planck Institute for Human Development

Effects of frustration of the achievement motive on task processing: Findings from diffusion model studies

Talk 04:00 PM - 05:30 PM (UTC)
In motive research, the analysis of experimental data by means of mathematical models like the diffusion model is not yet a common approach. Based on the results of two studies (N1 = 108, N2 = 104), I demonstrate that the diffusion model (Ratcliff, 1978) is a useful tool for gaining more insight into motivational processes. The experiments were inspired by the findings of a study by Brunstein and Hoyer (2002). They observed that individuals high in the implicit achievement motive who receive negative intraindividual performance feedback speed up in a response time task. The reduced mean response times were interpreted in terms of an increase in effort. In the two studies, in which I used a similar feedback manipulation, individuals with a high implicit achievement motive decreased their threshold separation parameter. Thus, they became less cautious over the course of the task. Accordingly, the decrease in response times previously reported might mainly be attributable to a change in strategy (focusing on speed instead of accuracy) rather than to an increase in effort. The results will be discussed in the context of emotion regulation strategies.
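The speed-accuracy trade-off captured by the threshold separation parameter can be sketched with a minimal diffusion-model simulation (Euler approximation, illustrative parameter values rather than estimates from the studies):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(a, v=1.0, s=1.0, dt=0.002, n=500):
    """Simulate n diffusion-model trials with drift rate v, threshold
    separation a, and an unbiased starting point (a/2); returns mean
    decision time and the proportion of upper-boundary (correct)
    responses. Illustrative values only."""
    rts, correct = [], []
    for _ in range(n):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= a)
    return float(np.mean(rts)), float(np.mean(correct))

# A lower threshold separation trades accuracy for speed
rt_hi, acc_hi = simulate_ddm(a=2.0)
rt_lo, acc_lo = simulate_ddm(a=1.0)
print(f"a=2.0: mean RT={rt_hi:.3f}s, accuracy={acc_hi:.2f}")
print(f"a=1.0: mean RT={rt_lo:.3f}s, accuracy={acc_lo:.2f}")
```

This is the signature pattern the talk describes: a drop in threshold separation produces faster but less cautious responding, which mean response times alone cannot distinguish from increased effort.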
Presenters
Veronika Lerche
Universität Heidelberg

A comparison of conflict diffusion models in the flanker task through pseudo-likelihood Bayes factors

Talk 04:00 PM - 05:30 PM (UTC)
Conflict tasks are among the most widely studied paradigms in cognitive psychology: participants are required to respond based on relevant sources of information while ignoring conflicting, irrelevant sources. The flanker task has been the focus of considerable modeling efforts, with only three models being able to provide a complete account of empirical choice response time distributions: the dual-stage two-phase model (DSTP), the shrinking spotlight model (SSP), and the diffusion model for conflict tasks (DMC). Although these models are grounded in different theoretical frameworks, can provide diverging measures of cognitive control, and are quantitatively distinguishable, no previous study has compared all three of these models in their ability to account for empirical data. Here, we compare the precise quantitative predictions of these models through Bayes factors, using probability density approximation to generate a pseudo-likelihood estimate of the unknown probability density function, and thermodynamic integration via differential evolution to approximate the analytically intractable Bayes factors. We find that for every participant across three data sets, DMC provides an inferior account of the data compared to the DSTP and SSP, which has important theoretical implications regarding the cognitive processes engaged in the flanker task, and practical implications for applying the models to flanker data. More generally, we argue that our combination of probability density approximation with marginal likelihood approximation provides a crucial step forward for the future of model comparison, where Bayes factors can be calculated between any models that can be simulated.
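The probability-density-approximation step can be sketched in miniature: simulate from the model, estimate the unknown likelihood with a kernel density estimate, and evaluate it at the observed data. A shifted lognormal stands in here for the conflict models, which are simulated the same way with richer dynamics:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def simulate_model(theta, n):
    """Stand-in simulator producing shifted-lognormal 'RTs'
    (hypothetical; the real DSTP/SSP/DMC simulators are more complex)."""
    mu, sigma, shift = theta
    return shift + rng.lognormal(mu, sigma, size=n)

def pseudo_loglik(theta, data, n_sim=20_000):
    """Probability density approximation: build a KDE over simulated
    data to approximate the intractable likelihood, then evaluate it
    at the observed values."""
    sims = simulate_model(theta, n_sim)
    kde = gaussian_kde(sims)
    return float(np.sum(np.log(np.maximum(kde(data), 1e-300))))

data = simulate_model((-1.0, 0.4, 0.3), n=200)
ll_true = pseudo_loglik((-1.0, 0.4, 0.3), data)
ll_off  = pseudo_loglik((-0.3, 0.4, 0.3), data)
print(ll_true, ll_off)  # data-generating parameters score higher
```

In the full method this pseudo-likelihood is embedded in a sampler, and thermodynamic integration over a temperature ladder yields the marginal likelihoods that the Bayes factors compare.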
Presenters
Nathan Evans
University of Amsterdam
Co-Authors
Mathieu Servant

Model selection by cross validation for computational models of visual working memory

Talk 04:00 PM - 05:30 PM (UTC)
In a typical visual working memory task, the participant is shown a display containing one or multiple items, followed by a delay, followed by a response screen on which the participant is asked to report the remembered feature value at a marked location in a near-continuous response space. Theories of the limitations of visual working memory have given rise to numerous computational models that account for the distribution of participants' errors in these tasks. Classes of models differ in the assumptions they make about the nature of memory precision. For example, variable precision models assume that memory precision varies across items and trials even when the number of items across trials is fixed, whereas fixed precision models assume that precision is invariant. In model comparisons, variable precision models tend to provide a better fit to the observed data, as measured by information criteria. In this project, we explore how well these models fare when they are judged not just by how well they fit the observed data, but also by how well they predict unseen data. We use cross-validation approaches to test the out-of-sample predictive ability of these visual working memory models. Thus, rather than focusing on models' ability to merely fit the observed data, we propose to take the generalizability of a model into account when selecting between computational models.
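The out-of-sample logic can be sketched with a toy example (hypothetical error data; a Student-t distribution, i.e. a Gaussian whose precision varies from trial to trial, stands in for a variable-precision model, and a single Gaussian for a fixed-precision one):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical recall errors with trial-to-trial precision variability,
# which makes the error distribution heavy-tailed
errors = stats.t.rvs(df=3, scale=0.3, size=400, random_state=rng)

def heldout_loglik(train, test):
    """Fit both stand-in models on one half, score them on the other."""
    # Fixed-precision stand-in: single Gaussian, MLE on the training half
    mu, sd = train.mean(), train.std()
    ll_fixed = stats.norm.logpdf(test, mu, sd).sum()
    # Variable-precision stand-in: Student-t fit on the training half
    df_, loc, scale = stats.t.fit(train)
    ll_var = stats.t.logpdf(test, df_, loc, scale).sum()
    return ll_fixed, ll_var

# Simple two-fold cross-validation
half = len(errors) // 2
f1, v1 = heldout_loglik(errors[:half], errors[half:])
f2, v2 = heldout_loglik(errors[half:], errors[:half])
ll_f, ll_v = f1 + f2, v1 + v2
print(f"held-out log-lik  fixed: {ll_f:.1f}  variable: {ll_v:.1f}")
```

The point of scoring on held-out data is that a model can no longer win merely by being flexible; it must predict trials it never saw.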
Presenters
Nicholas Lange
University of Warwick
Co-Authors
Henrik Singmann
University of Warwick

A Strong Test of Empirical Validity for Cognitive Process Models

Talk 04:00 PM - 05:30 PM (UTC)
Cognitive process models are a popular tool for examining the processing structure underlying cognitive phenomena. When interpreting these models' parameters, it is useful to consider individual differences. Previously, these models were either applied on the individual level, with individual differences assessed via classification of parameter values, or applied on the aggregate level, with individual differences ignored. With the introduction of Bayesian statistics to the field, hierarchical process models have been developed as a tool for drawing inferences on the aggregate level while still accounting for individual variability. Usually, individual parameter estimates are ignored and only distribution parameters are interpreted. Yet this practice does not account for severe, qualitative individual differences. For example, an experimental manipulation may affect all individuals' guessing parameters in the same direction, or it may affect guessing differently for different participants. Such an assessment is crucial for establishing the empirical validity of process models. In this talk, we show how a strict test of selective influence can be performed using ordinal constraints on individuals' parameters. This assessment is a general tool for a wide range of cognitive process models.
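The ordinal-constraint idea can be sketched as follows, using simulated posterior samples in place of output from an actual hierarchical model (all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior samples of each participant's condition effect
# (n_samples x n_participants), as a hierarchical model might return
n_samples, n_participants = 4000, 12
true_effects = rng.normal(0.25, 0.05, n_participants)  # all positive here
post = true_effects + rng.normal(0, 0.08, (n_samples, n_participants))

# The ordinal constraint "the manipulation affects everyone in the same
# direction" is the posterior probability that every individual effect
# is positive, evaluated jointly across participants per sample
p_all_positive = float(np.mean(np.all(post > 0, axis=1)))
print(f"P(all effects > 0 | data) = {p_all_positive:.3f}")
```

Note that this joint probability can be low even when every marginal mean is positive, which is exactly the qualitative individual difference that interpreting only distribution parameters would miss.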
Presenters
Julia Haaf
University of Amsterdam
Co-Authors
Nathan Evans
University of Amsterdam
Frederik Aust
