
Understanding Bayesian Cognitive Modelling
Explore the concepts of Bayesian cognitive modelling, including building normative models, generating predictions, and comparing them with actual data. Discover functions amenable to Bayesian analysis and computational methods such as MCMC sampling and variational Bayesian approximation.
Presentation Transcript
Modelling? http://doingbayesiandataanalysis.blogspot.co.il/2011/10/bayesian-models-of-mind-psychometric.html
The Bayesian brain hypothesis
1. Take a cognitive task, or a more general cognitive or social function.
2. Build a normative model describing how an optimal Bayesian mechanism would perform this task.
3. Generate predictions and compare the normative model to the actual data.
What functions can be Bayesian?
- Sequential updating (e.g. learning tasks)
- The need to invert generative models (i.e. to infer from data to its causes, e.g. perception)
- Integrating information under uncertainty (e.g. cue integration)
- Functions involving predictions and prediction errors
Today
- The normal/normal model (known variance)
- The simple Kalman filter model
- The normal/normal-inverse gamma model (unknown variance)
Computational basis
We talked about MCMC sampling for Bayesian data analysis. Bayesian-brain type models usually apply something called conjugate priors:
- The classical (pre-computers) approach to Bayesian inference: a mathematical, closed-form solution.
- For some likelihood functions, there are prior distributions that produce known-form posterior distributions, with algebraically derived parameters.
- E.g. for a normal likelihood, using a normal prior produces a normal posterior distribution.
- More limited, but much faster than MCMC. https://en.wikipedia.org/wiki/Conjugate_prior
A third, more complicated mathematical approach is called variational Bayesian approximation.
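As a minimal illustration of conjugacy (my own addition, not part of the slides), the sketch below compares the closed-form normal/normal posterior with a brute-force grid approximation for a single observation. The specific numbers (prior N(0, 2²), observation noise SD of 1, observation x = 3) are arbitrary demo values.

```python
import numpy as np
from scipy import stats

# Prior over mu: N(mu0, s0^2); likelihood: x | mu ~ N(mu, sx^2) with known sx.
mu0, s0 = 0.0, 2.0      # prior mean and SD (arbitrary demo values)
sx = 1.0                # known observation-noise SD
x = 3.0                 # a single observed data point

# Closed-form conjugate update: precision-weighted average of prior and data.
post_prec = 1 / s0**2 + 1 / sx**2
mu1 = (mu0 / s0**2 + x / sx**2) / post_prec
s1 = np.sqrt(1 / post_prec)

# Brute-force check: evaluate prior * likelihood on a grid and normalise.
grid = np.linspace(-10, 10, 10001)
unnorm = stats.norm.pdf(grid, mu0, s0) * stats.norm.pdf(x, grid, sx)
post = unnorm / unnorm.sum()

grid_mean = np.sum(grid * post)
grid_sd = np.sqrt(np.sum((grid - grid_mean) ** 2 * post))

print(f"closed form: mean={mu1:.3f}, sd={s1:.3f}")             # 2.400, 0.894
print(f"grid approx: mean={grid_mean:.3f}, sd={grid_sd:.3f}")  # ~2.400, ~0.894
```

The two answers agree, but the closed form costs a couple of arithmetic operations rather than a grid (or MCMC) evaluation.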
The normal-normal model

Likelihood: $x \mid \mu, \sigma_x^2 \sim N(\mu, \sigma_x^2)$, where $\sigma_x^2$ is the known data variance.
Prior: $\mu \sim N(\mu_0, \sigma_{\mu_0}^2)$, where $\mu_0$ is the prior mean and $\sigma_{\mu_0}^2$ the prior uncertainty.
Posterior: $\mu \mid x \sim N(\mu_1, \sigma_{\mu_1}^2)$, where $\mu_1$ is the posterior mean and $\sigma_{\mu_1}^2$ the posterior uncertainty.

$\mu_1 = \dfrac{\mu_0/\sigma_{\mu_0}^2 + x/\sigma_x^2}{1/\sigma_{\mu_0}^2 + 1/\sigma_x^2} = \mu_0 + \dfrac{\sigma_{\mu_0}^2}{\sigma_{\mu_0}^2 + \sigma_x^2}\,(x - \mu_0)$

$\dfrac{1}{\sigma_{\mu_1}^2} = \dfrac{1}{\sigma_{\mu_0}^2} + \dfrac{1}{\sigma_x^2}$

- The posterior mean is a weighted average of the prior mean and the data.
- In the second form, $\mu_0$ is the prediction, $(x - \mu_0)$ is the prediction error, and $\sigma_{\mu_0}^2/(\sigma_{\mu_0}^2 + \sigma_x^2)$ is the PE weighting (if the data are noisier, they have a lower effect).
- Uncertainty always decreases (precision increases).
The normal-normal model (N observations)

Likelihood: $x_i \mid \mu, \sigma_x^2 \sim N(\mu, \sigma_x^2)$ for $i = 1, \dots, N$, with $\sigma_x^2$ the known data variance.
Prior: $\mu \sim N(\mu_0, \sigma_{\mu_0}^2)$, with $\mu_0$ the prior mean and $\sigma_{\mu_0}^2$ the prior uncertainty.
Posterior: $\mu \mid x \sim N(\mu_1, \sigma_{\mu_1}^2)$, with $\mu_1$ the posterior mean and $\sigma_{\mu_1}^2$ the posterior uncertainty.

$\mu_1 = \dfrac{\mu_0/\sigma_{\mu_0}^2 + N\bar{x}/\sigma_x^2}{1/\sigma_{\mu_0}^2 + N/\sigma_x^2} = \mu_0 + \dfrac{\sigma_{\mu_0}^2}{\sigma_{\mu_0}^2 + \sigma_x^2/N}\,(\bar{x} - \mu_0)$

$\dfrac{1}{\sigma_{\mu_1}^2} = \dfrac{1}{\sigma_{\mu_0}^2} + \dfrac{N}{\sigma_x^2}$

- The weighting of the data increases with an increase in N.
- Larger increase in precision as N increases.
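A small sketch (my own, not from the slides) implementing the N-observation update above; running it shows how the weight on the data and the posterior precision grow with N.

```python
import numpy as np

def normal_normal_update(mu0, var0, x, var_x):
    """Conjugate update for a normal mean with known data variance.

    mu0, var0 : prior mean and variance of mu
    x         : array of observations
    var_x     : known observation variance
    Returns (posterior mean, posterior variance).
    """
    x = np.atleast_1d(x)
    n = x.size
    post_prec = 1 / var0 + n / var_x
    mu1 = (mu0 / var0 + n * x.mean() / var_x) / post_prec
    return mu1, 1 / post_prec

# The weight on the data, var0 / (var0 + var_x / N), grows with N.
for n in (1, 5, 25):
    x = np.full(n, 3.0)   # hypothetical data, all equal to 3
    mu1, var1 = normal_normal_update(0.0, 2.0**2, x, 1.0**2)
    print(n, round(mu1, 3), round(var1, 3))
```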
Cue integration (e.g. Ernst & Banks, 2002, 2004)
- Visual and haptic cues. The reliability of each cue is registered by participants after each cue is presented separately.
- When both cues are present, the relative weight of each cue is determined by its reliability.
- The uncertainty with both cues together should be smaller than that of each of the separate cues.
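The sketch below (my own illustration with made-up cue values, not data from Ernst & Banks) shows the reliability-weighted combination implied by this account: each cue is weighted by its precision, and the combined variance is smaller than either single-cue variance.

```python
import numpy as np

# Hypothetical single-cue estimates of object size (arbitrary units).
visual_est, visual_sd = 55.0, 2.0   # visual cue: estimate and noise SD
haptic_est, haptic_sd = 60.0, 4.0   # haptic cue: less reliable here

# Precision (1/variance) weighting of the two cues.
w_visual = (1 / visual_sd**2) / (1 / visual_sd**2 + 1 / haptic_sd**2)
w_haptic = 1 - w_visual

combined_est = w_visual * visual_est + w_haptic * haptic_est
combined_var = 1 / (1 / visual_sd**2 + 1 / haptic_sd**2)

print(f"weights: visual={w_visual:.2f}, haptic={w_haptic:.2f}")   # 0.80 / 0.20
print(f"combined estimate = {combined_est:.1f}")                   # 56.0
print(f"combined SD = {np.sqrt(combined_var):.2f} "
      f"(smaller than either cue alone)")
```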
The Kalman filter (dynamic parameter estimation)

Likelihood: $x_t \mid \mu_t, \sigma_x^2 \sim N(\mu_t, \sigma_x^2)$, with $\sigma_x^2$ the known data variance and $\sigma_d^2$ the known diffusion variance.
Initial prior: $\mu_{t=1} \sim N(\mu_0, \sigma_{\mu_0}^2)$, with $\mu_0$ the prior mean and $\sigma_{\mu_0}^2$ the prior uncertainty.
State transition: $\mu_t \mid \mu_{t-1} \sim N(\mu_{t-1}, \sigma_d^2)$.

Predict stage (prior):
$\hat{\mu}_t = \mu_{t-1}$
$\hat{\sigma}_{\mu_t}^2 = \sigma_{\mu_{t-1}}^2 + \sigma_d^2$

Update stage (posterior):
$\mu_t = \dfrac{\hat{\mu}_t/\hat{\sigma}_{\mu_t}^2 + x_t/\sigma_x^2}{1/\hat{\sigma}_{\mu_t}^2 + 1/\sigma_x^2} = \hat{\mu}_t + \dfrac{\hat{\sigma}_{\mu_t}^2}{\hat{\sigma}_{\mu_t}^2 + \sigma_x^2}\,(x_t - \hat{\mu}_t)$

$\dfrac{1}{\sigma_{\mu_t}^2} = \dfrac{1}{\hat{\sigma}_{\mu_t}^2} + \dfrac{1}{\sigma_x^2}$
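A minimal one-dimensional Kalman filter in this parameterisation (my own sketch: the prediction carries over the previous posterior mean, its variance is inflated by the diffusion variance, and each observation triggers a normal/normal update). All numeric values in the usage example are illustrative.

```python
import numpy as np

def kalman_filter_1d(x, mu0, var0, var_x, var_d):
    """Track a drifting mean mu_t from noisy observations x_t.

    var_x : known observation (data) variance
    var_d : known diffusion variance of the hidden mean
    Returns arrays of posterior means and variances, one per observation.
    """
    mu, var = mu0, var0
    means, variances = [], []
    for x_t in x:
        # Predict stage: prior for trial t inherits the previous mean,
        # but its variance grows by the diffusion variance.
        mu_hat, var_hat = mu, var + var_d
        # Update stage: standard normal/normal update with the new observation.
        gain = var_hat / (var_hat + var_x)
        mu = mu_hat + gain * (x_t - mu_hat)
        var = 1 / (1 / var_hat + 1 / var_x)
        means.append(mu)
        variances.append(var)
    return np.array(means), np.array(variances)

# Toy usage: a hidden mean that drifts as a random walk, observed with noise.
rng = np.random.default_rng(0)
true_mu = np.cumsum(rng.normal(0, 0.5, size=50))
obs = true_mu + rng.normal(0, 2.0, size=50)
means, variances = kalman_filter_1d(obs, mu0=0.0, var0=10.0, var_x=4.0, var_d=0.25)
```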
Kalman filter in the sensorimotor system (Wolpert et al., 1995)
- Predict arm location given its previous location, the (known) motor command ($C_i$) and the (known) motor noise ($\sigma_d^2$): $\mu_t \mid \mu_{t-1} \sim N(\mu_{t-1} + C_i, \sigma_d^2)$.
- The current sensory (visual) feedback has some sensory noise ($\sigma_x^2$): $x_t \mid \mu_t \sim N(\mu_t, \sigma_x^2)$.
- Update: $\mu_t = \hat{\mu}_t + \dfrac{\hat{\sigma}_{\mu_t}^2}{\hat{\sigma}_{\mu_t}^2 + \sigma_x^2}\,(x_t - \hat{\mu}_t)$.
Kalman filter in the sensorimotor system (Wolpert et al., 1995)
- Move the arm in the dark (large sensory variance) and estimate its location; the location can only be seen at the initial position.
- Assuming that the forward model (prediction) tends to overestimate movement strength, subjects should rely mostly on the forward model and thus overestimate the final location.
- At first, the variance of the visual input increases, causing increased reliance on the forward model (increased bias).
- Then, as the variance of the forward model gradually increases, subjects should rely less on it and the bias should decrease.
- The uncertainty of the final estimate increases as the forward-model variance increases.
Simulation: reversal learning
Outcomes are drawn from $x \sim N(10, 7^2)$ before the reversal and from $x \sim N(-10, 7^2)$ after it.
Simulation: reversal learning (steady state, known variance) [simulation plots]
Simulation: reversal learning (Kalman filter) [simulation plots]
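The slides only show the simulation plots. The sketch below is a rough reimplementation (my own, with guessed settings such as 200 trials, a mid-run reversal, and a diffusion SD of 2) that compares the steady-state known-variance learner with the Kalman filter on data from N(10, 7²) and then N(−10, 7²). The steady-state model's effective learning rate shrinks as trials accumulate, so it adapts slowly after the reversal; the Kalman filter keeps uncertainty alive through the diffusion variance and re-adapts faster.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
true_mean = np.where(np.arange(n_trials) < n_trials // 2, 10.0, -10.0)  # reversal halfway
x = rng.normal(true_mean, 7.0)                                          # outcomes, SD = 7

var_x = 7.0**2                 # known data variance
mu_ss, var_ss = 0.0, 100.0     # steady-state model: prior over a fixed mean
mu_kf, var_kf = 0.0, 100.0     # Kalman filter: prior over a drifting mean
var_d = 2.0**2                 # guessed diffusion variance for the Kalman filter

ss_trace, kf_trace = [], []
for x_t in x:
    # Steady-state learner: plain normal/normal update, precision only grows.
    mu_ss = (mu_ss / var_ss + x_t / var_x) / (1 / var_ss + 1 / var_x)
    var_ss = 1 / (1 / var_ss + 1 / var_x)
    ss_trace.append(mu_ss)

    # Kalman filter: inflate uncertainty before each update.
    var_hat = var_kf + var_d
    gain = var_hat / (var_hat + var_x)
    mu_kf = mu_kf + gain * (x_t - mu_kf)
    var_kf = 1 / (1 / var_hat + 1 / var_x)
    kf_trace.append(mu_kf)

# After the reversal, the Kalman estimate reaches the new mean much sooner.
print("steady state, last 5 estimates:", np.round(ss_trace[-5:], 1))
print("Kalman filter, last 5 estimates:", np.round(kf_trace[-5:], 1))
```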
The normal/normal-inverse gamma model (unknown variance)

Likelihood: $x_i \mid \mu, \sigma_x^2 \sim N(\mu, \sigma_x^2)$, with both the mean and the data variance unknown.
Prior: $\sigma_x^2 \sim \text{Scaled-Inv-}\chi^2(\nu_0, \sigma_0^2)$ (equivalently, an inverse-gamma prior on the variance) and $\mu \mid \sigma_x^2 \sim N(\mu_0, \sigma_x^2/\kappa_0)$, where $\mu_0$ is the prior mean, $\sigma_0^2$ the prior scale of the variance, $\nu_0$ the prior degrees of freedom, and $\kappa_0$ the prior number of (pseudo-)observations; the prior uncertainty about the mean scales with $\sigma_x^2/\kappa_0$.

Posterior (after $n$ observations with sample mean $\bar{x}$ and sample variance $s^2$):
$\sigma_x^2 \mid x \sim \text{Scaled-Inv-}\chi^2(\nu_0 + n, \sigma_1^2)$
$\mu \mid \sigma_x^2, x \sim N(\mu_1, \sigma_x^2/\kappa_1)$
$\mu \mid x \sim t_{\nu_0 + n}(\mu_1, \sigma_1^2/\kappa_1)$

$\mu_1 = \dfrac{\kappa_0}{\kappa_0 + n}\,\mu_0 + \dfrac{n}{\kappa_0 + n}\,\bar{x} = \mu_0 + \dfrac{n}{\kappa_0 + n}\,(\bar{x} - \mu_0)$
$\kappa_1 = \kappa_0 + n, \qquad \nu_1 = \nu_0 + n$
$\nu_1 \sigma_1^2 = \nu_0 \sigma_0^2 + (n-1)s^2 + \dfrac{\kappa_0 n}{\kappa_0 + n}\,(\bar{x} - \mu_0)^2$
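A sketch of the corresponding conjugate update (my own implementation of the standard normal / scaled-inverse-χ² formulas written above; the demo data and hyperparameters are made up). It returns the posterior hyperparameters, from which the marginal over the mean is a t distribution with $\nu_1$ degrees of freedom, location $\mu_1$ and scale $\sqrt{\sigma_1^2/\kappa_1}$.

```python
import numpy as np

def normal_invchi2_update(x, mu0, kappa0, nu0, sigma0_sq):
    """Conjugate update when both the mean and the variance are unknown.

    Prior: sigma^2 ~ Scaled-Inv-chi2(nu0, sigma0_sq),
           mu | sigma^2 ~ N(mu0, sigma^2 / kappa0).
    Returns the posterior hyperparameters (mu1, kappa1, nu1, sigma1_sq).
    """
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()

    kappa1 = kappa0 + n
    nu1 = nu0 + n
    mu1 = (kappa0 * mu0 + n * xbar) / kappa1
    ss = np.sum((x - xbar) ** 2)            # (n - 1) * sample variance
    sigma1_sq = (nu0 * sigma0_sq + ss
                 + kappa0 * n * (xbar - mu0) ** 2 / kappa1) / nu1
    return mu1, kappa1, nu1, sigma1_sq

# Toy usage with made-up data and a weak prior.
data = [8.2, 12.5, 9.9, 14.1, 11.0]
mu1, kappa1, nu1, sigma1_sq = normal_invchi2_update(
    data, mu0=0.0, kappa0=1.0, nu0=1.0, sigma0_sq=10.0)
print(mu1, kappa1, nu1, np.sqrt(sigma1_sq / kappa1))
```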
Simulation: reversal learning (steady state, unknown variance) [simulation plots]
Bayesian belief networks
Eckel, C. C., El-Gamal, M. A., & Wilson, R. K. (2009). Risk loving after the storm: A Bayesian-Network study of Hurricane Katrina evacuees. Journal of Economic Behavior & Organization, 69(2), 110-124.
- Demographic variables
- Gambling task
- PANAS (emotions questionnaire)