Bayesian Estimation and Modeling Issues in Econometrics


Learn about Bayesian estimation in econometrics, including the specification of conditional likelihood, priors, posterior density, and computation of Bayesian estimators. Explore modeling issues, convergence of Bayesian and Classical MLE methods, and practical problems in sampling from joint posteriors.

  • Bayesian estimation
  • Econometrics
  • Modeling issues
  • Posterior density
  • Regression


Presentation Transcript


  1. Econometrics I. Professor William Greene, Stern School of Business, Department of Economics.

  2. Econometrics I, Part 24: Bayesian Estimation.

  3. Bayesian Estimators. Random parameters vs. randomly distributed parameters; models of individual heterogeneity; random effects: consumer brand choice; fixed effects: hospital costs.

  4. Bayesian Estimation. Specification of the conditional likelihood: f(data | parameters). Specification of the priors: g(parameters). Posterior density of the parameters: f(parameters | data) = f(data | parameters) g(parameters) / f(data). Posterior mean = E[parameters | data].

  5. The Marginal Density for the Data is Irrelevant.
  f(θ | data) = f(data | θ)p(θ) / f(data) = L(data | θ)p(θ) / f(data).
  The joint density of θ and the data is f(data, θ) = L(data | θ)p(θ).
  The marginal density of the data is f(data) = ∫ f(data, θ) dθ = ∫ L(data | θ)p(θ) dθ.
  Thus, f(θ | data) = L(data | θ)p(θ) / ∫ L(data | θ)p(θ) dθ.
  Posterior mean = ∫ θ p(θ | data) dθ = ∫ θ L(data | θ)p(θ) dθ / ∫ L(data | θ)p(θ) dθ.
  This requires specification of the likelihood and the prior.

  6. Computing Bayesian Estimators. First generation: do the integration (math): E[θ | data] = ∫ θ f(data | θ)g(θ) / f(data) dθ. Contemporary approach, simulation: (1) deduce the posterior; (2) draw random samples from the posterior and compute the sample means and variances of the draws. (Relies on the law of large numbers.)
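
A minimal Python/NumPy sketch of the simulation approach (an added illustration, not part of the slides): for Bernoulli data with a Beta(1,1) prior the posterior can be deduced analytically, so drawing from it and averaging can be checked against the exact posterior mean. The data values n and s are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n Bernoulli trials with s successes
n, s = 50, 18

# (1) Deduce the posterior: Beta(1,1) prior + Bernoulli likelihood -> Beta(1+s, 1+n-s)
a_post, b_post = 1 + s, 1 + n - s

# (2) Draw from the posterior and average (law of large numbers)
draws = rng.beta(a_post, b_post, size=100_000)
print("simulated posterior mean:", draws.mean())
print("analytic posterior mean :", a_post / (a_post + b_post))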

  7. Modeling Issues. As n → ∞, the likelihood dominates and the prior disappears, so Bayesian and classical MLE results converge. (This needs the mode of the posterior to converge to the mean.) Priors: diffuse priors (large variances) imply little prior information (NONINFORMATIVE); INFORMATIVE priors have finite variances that appear in the posterior and therefore taint any final results.

  8. A Practical Problem. Sampling from the joint posterior may be impossible. E.g., linear regression with a diffuse prior:
  f(β, σ² | y, X) = { [vs²/2]^(v/2+1) / Γ(v/2+1) } (1/σ²)^(v/2+2) e^(−vs²/(2σ²)) × [2π]^(−K/2) |σ²(X'X)⁻¹|^(−1/2) exp( −(1/2)(β − b)'[σ²(X'X)⁻¹]⁻¹(β − b) )
  (b = least squares estimator, v = n − K, s² = e'e/v.) What is this??? To do 'simulation based estimation' here, we need joint observations on (β, σ²).

  9. A Solution to the Sampling Problem. The joint posterior, p(β, σ² | data), is intractable. But, for inference about β, a sample from the marginal posterior, p(β | data), would suffice; for inference about σ², a sample from the marginal posterior, p(σ² | data), would suffice. Can we deduce these? For this problem, we do have the conditionals:
  p(β | σ², data) = N[ b, σ²(X'X)⁻¹ ]
  p(σ² | β, data) = a gamma distribution based on Σᵢ (yᵢ − xᵢ'β)².
  Can we use this information to sample from p(β | data) and p(σ² | data)?

  10. The Gibbs Sampler. Target: sample from the marginals of f(x1, x2), the joint distribution. The joint distribution is unknown, or it is not possible to sample from it directly. Assumed: f(x1|x2) and f(x2|x1) are both known and samples can be drawn from both. Gibbs sampling: obtain draws from (x1, x2) by cycling between x1|x2 and x2|x1. Start x1,0 anywhere in the right range. Draw x2,0 from x2|x1,0. Then draw x1,1 from x1|x2,0, and so on. Several thousand cycles produce the draws; discard the first several thousand to remove the influence of the initial conditions (the 'burn in'). Average the retained draws to estimate the marginal means.

  11. Bivariate Normal Sampling. Draw a random sample from the bivariate normal distribution with means (0, 0), variances 1, and correlation ρ.
  (1) Direct approach: let (u₁, u₂) be two independent standard normal draws (easy) and set (v₁, v₂)' = Γ(u₁, u₂)', where Γ = [1, 0; ρ, √(1 − ρ²)] is such that ΓΓ' = Σ = [1, ρ; ρ, 1].
  (2) Gibbs sampler: v₁ | v₂ ~ N[ρv₂, 1 − ρ²]; v₂ | v₁ ~ N[ρv₁, 1 − ρ²].
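
A short Python sketch comparing the two approaches on this slide (an added illustration, not from the deck); the correlation value rho and the numbers of draws are assumptions.

import numpy as np

rng = np.random.default_rng(1)
rho = 0.7                     # assumed correlation for the illustration
n_draws, burn_in = 20_000, 2_000

# (1) Direct approach: v = Gamma u with Gamma Gamma' = [[1, rho], [rho, 1]]
Gamma = np.array([[1.0, 0.0], [rho, np.sqrt(1 - rho**2)]])
direct = rng.standard_normal((n_draws, 2)) @ Gamma.T

# (2) Gibbs sampler: cycle between v1 | v2 and v2 | v1
v1, v2 = 0.0, 0.0
gibbs = np.empty((n_draws, 2))
for r in range(n_draws):
    v1 = rho * v2 + np.sqrt(1 - rho**2) * rng.standard_normal()
    v2 = rho * v1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    gibbs[r] = v1, v2

# Both sample correlations should be close to rho
print(np.corrcoef(direct.T)[0, 1], np.corrcoef(gibbs[burn_in:].T)[0, 1])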

  12. Gibbs Sampling for the Linear Regression Model.
  p(β | σ², data) = N[ b, σ²(X'X)⁻¹ ]
  p(σ² | β, data) = a gamma distribution based on Σᵢ (yᵢ − xᵢ'β)².
  Iterate back and forth between these two distributions.
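
A sketch of this two-block Gibbs sampler in Python/NumPy (added here, not from the slides). It assumes the diffuse prior p(β, σ²) ∝ 1/σ² from the earlier slide, under which σ² | β, data is inverted gamma, so 1/σ² is gamma with shape n/2 and rate Σᵢ(yᵢ − xᵢ'β)²/2; the simulated data and true coefficients are assumptions for the illustration.

import numpy as np

rng = np.random.default_rng(2)

# Simulated data (assumptions for the illustration)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, K - 1))])
beta_true = np.array([0.5, 1.0, -1.0])
y = X @ beta_true + rng.standard_normal(n)

XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y

n_draws, burn_in = 10_000, 2_000
beta = b_ols.copy()
keep_beta, keep_sig2 = [], []

for r in range(n_draws):
    # sigma^2 | beta, data: 1/sigma^2 ~ Gamma(shape = n/2, rate = SSR/2)
    ssr = np.sum((y - X @ beta) ** 2)
    sig2 = 1.0 / rng.gamma(shape=n / 2, scale=2.0 / ssr)
    # beta | sigma^2, data: N[b, sigma^2 (X'X)^-1]
    beta = rng.multivariate_normal(b_ols, sig2 * XtX_inv)
    if r >= burn_in:
        keep_beta.append(beta)
        keep_sig2.append(sig2)

print("posterior means:", np.mean(keep_beta, axis=0), np.mean(keep_sig2))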

  13. Application: the Probit Model.
  (a) y*ᵢ = xᵢ'β + εᵢ, εᵢ ~ N[0, 1]
  (b) yᵢ = 1 if y*ᵢ > 0, 0 otherwise.
  Consider estimation of β and y*ᵢ (data augmentation).
  (1) If y* were observed, this would be a linear regression (y would not be useful since it is just sgn(y*)). We saw in the linear model before how to sample from p(β | y*, y).
  (2) If (only) β were observed, y*ᵢ would be a draw from the normal distribution with mean xᵢ'β and variance 1. But yᵢ gives the sign of y*ᵢ: y*ᵢ | β, yᵢ is a draw from the truncated normal (truncated from above at 0 if yᵢ = 0, from below at 0 if yᵢ = 1).

  14. Gibbs Sampling for the Probit Model.
  (1) Choose an initial value for β (maybe the MLE).
  (2) Generate y* by sampling N observations from the truncated normal with mean xᵢ'β and variance 1, truncated from above at 0 if yᵢ = 0, from below if yᵢ = 1.
  (3) Generate β by drawing a random normal vector with mean vector (X'X)⁻¹X'y* and variance matrix (X'X)⁻¹.
  (4) Return to (2) and repeat 10,000 times, retaining the last 5,000 draws; the first 5,000 are the 'burn in.'
  (5) Estimate the posterior mean of β by averaging the last 5,000 draws. (This corresponds to a uniform prior over β.)

  15. Generating Random Draws from f(x). The inverse probability method of sampling random draws: if F(x) is the CDF of random variable x, then a random draw on x may be obtained as F⁻¹(u), where u is a draw from the standard uniform (0,1).
  Examples:
  Exponential: f(x) = λ exp(−λx); F(x) = 1 − exp(−λx); x = −(1/λ) log(1 − u).
  Normal: F(x) = Φ(x); x = Φ⁻¹(u).
  Truncated normal (probit step): y*ᵢ = μᵢ + Φ⁻¹[1 − (1 − u)Φ(μᵢ)] for yᵢ = 1; y*ᵢ = μᵢ + Φ⁻¹[uΦ(−μᵢ)] for yᵢ = 0.
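
A small Python check of these inverse-CDF formulas (an added illustration); the rate lam, the mean mu, and the number of draws are assumptions, and scipy's norm.cdf and norm.ppf play the roles of Φ and Φ⁻¹.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)

# Exponential with rate lam: x = -(1/lam) log(1 - u)
lam = 2.0
x_exp = -np.log(1 - u) / lam
print("exponential mean (should be about 1/lam):", x_exp.mean())

# Truncated normal draws used in the probit Gibbs sampler (mean mu, variance 1)
mu = 0.3
y1 = mu + norm.ppf(1 - (1 - u) * norm.cdf(mu))    # truncated below at 0 (y = 1)
y0 = mu + norm.ppf(u * norm.cdf(-mu))             # truncated above at 0 (y = 0)
print((y1 > 0).all(), (y0 <= 0).all())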

  16. NLOGIT code for the probit Gibbs sampler:
? Generate raw data
Calc     ; Ran(13579) $
Sample   ; 1 - 250 $
Create   ; x1 = rnn(0,1) ; x2 = rnn(0,1) $
Create   ; ys = .2 + .5*x1 - .5*x2 + rnn(0,1) ; y = ys > 0 $
Namelist ; x = one,x1,x2 $
Matrix   ; xxi = <x'x> $
Calc     ; Rep = 200 ; Ri = 1/(Rep-25) $
? Starting values and accumulate mean and variance matrices
Matrix   ; beta = [0/0/0] ; bbar = init(3,1,0) ; bv = init(3,3,0) $
Proc = gibbs $
? Markov Chain Monte Carlo iterations
Do for   ; simulate ; r = 1,Rep $
? ------- [ Sample y* | beta ] --------------------------
Create   ; mui = x'beta ; f = rnu(0,1)
         ; if(y=1) ysg = mui + inp(1-(1-f)*phi( mui));
           (else)  ysg = mui + inp( f *phi(-mui)) $
? ------- [ Sample beta | y* ] --------------------------
Matrix   ; mb = xxi*x'ysg ; beta = rndm(mb,xxi) $
? ------- [ Sum posterior mean and variance. Discard burn in. ]
Matrix   ; if[r > 25] ; bbar = bbar+beta ; bv = bv+beta*beta' $
Enddo    ; simulate $
Endproc $
Execute  ; Proc = Gibbs $
Matrix   ; bbar = ri*bbar ; bv = ri*bv - bbar*bbar' $
Probit   ; lhs = y ; rhs = x $
Matrix   ; Stat(bbar,bv,x) $
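
For readers without NLOGIT, a rough NumPy analogue of the same procedure (an added sketch, not Greene's code). The sample size, true coefficients, and the 200-draw / 25-burn-in settings mirror the listing; the variable names and libraries are assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(13579)

# Simulate data as in the listing: 250 obs, y* = .2 + .5*x1 - .5*x2 + e
n = 250
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([np.ones(n), x1, x2])
ystar_true = 0.2 + 0.5 * x1 - 0.5 * x2 + rng.standard_normal(n)
y = (ystar_true > 0).astype(float)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(3)
rep, burn_in = 200, 25
draws = []

for r in range(rep):
    # Sample y* | beta from the appropriate truncated normals
    mu = X @ beta
    u = rng.uniform(size=n)
    ystar = np.where(y == 1,
                     mu + norm.ppf(1 - (1 - u) * norm.cdf(mu)),
                     mu + norm.ppf(u * norm.cdf(-mu)))
    # Sample beta | y* from N[(X'X)^-1 X'y*, (X'X)^-1]
    mb = XtX_inv @ X.T @ ystar
    beta = rng.multivariate_normal(mb, XtX_inv)
    if r >= burn_in:
        draws.append(beta)

draws = np.array(draws)
print("posterior mean:", draws.mean(axis=0))
print("posterior std :", draws.std(axis=0))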

  17. Example: Probit MLE vs. Gibbs
--> Matrix ; Stat(bbar,bv); Stat(b,varb) $
Number of observations in current sample = 1000
Number of parameters computed here       = 3
Number of degrees of freedom             = 997

Variable   Coefficient    Standard Error   b/St.Er.   P[|Z|>z]
(Gibbs posterior means, bbar)
BBAR_1      .21483281      .05076663          4.232     .0000
BBAR_2      .40815611      .04779292          8.540     .0000
BBAR_3     -.49692480      .04508507        -11.022     .0000
(Probit maximum likelihood estimates, b)
B_1         .22696546      .04276520          5.307     .0000
B_2         .40038880      .04671773          8.570     .0000
B_3        -.50012787      .04705345        -10.629     .0000

  18. A Random Effects Approach. Allenby and Rossi, "Marketing Models of Consumer Heterogeneity." Discrete choice model: brand choice. Hierarchical Bayes. Multinomial probit. Panel data: purchases of 4 brands of ketchup.

  19. Structure. Conditional data generation mechanism:
  y*(i,t,j) = βᵢ'x(i,t,j) + ε(i,t,j)   (utility for consumer i, choice situation t, brand j)
  Y(i,t,j) = 1[ y*(i,t,j) = maximum utility among the J choices ]
  x(i,t,j) = (constant, log price, "availability," "featured")
  ε(i,t,j) ~ N[0, σⱼ²], σ₁ = 1.
  This implies a J-outcome multinomial probit model.

  20. Bayesian Priors. Prior densities:
  βᵢ ~ N[β̄, V_β], i.e. βᵢ = β̄ + wᵢ, wᵢ ~ N[0, V_β]
  σⱼ ~ Inverse Gamma[v, sⱼ] (looks like chi-squared), v = 3, sⱼ = 1.
  Priors over the model parameters:
  β̄ ~ N[0, aV_β] (a diffuse normal prior)
  V_β ~ Wishart[v₀, V₀], v₀ = 8, V₀ = 8I.

  21. Bayesian Estimator. Joint posterior mean = E[β₁,...,β_N, β̄, V_β, σ₁,...,σ_J | data]. The integral does not exist in closed form, so estimate it by random samples from the joint posterior. But the full joint posterior is not known, so it is not possible to sample from it directly.

  22. Gibbs Cycles for the MNP Model. Samples from the marginal posteriors are obtained by cycling through the conditionals.
  Marginal posterior for the individual parameters (known and can be sampled): βᵢ | β̄, V_β, σ, data.
  Marginal posteriors for the common parameters (each known and each can be sampled): β̄ | β₁,...,β_N, V_β, σ, data; V_β | β₁,...,β_N, β̄, σ, data; σ | β₁,...,β_N, β̄, V_β, data.

  23. Results. Individual parameter vectors and disturbance variances; individual estimates of choice probabilities. The results are the same as the random parameters model with slightly different weights. Allenby and Rossi call the classical method an "approximate Bayesian" approach. (Greene calls the Bayesian estimator an "approximate random parameters model.") Who's right? The Bayesian layers on implausible uninformative priors and calls the maximum likelihood results exact Bayesian estimators. The classical approach is strongly parametric and a slave to the distributional assumptions. The Bayesian approach is even more strongly parametric than the classical. Neither is right; both are right.

  24. Comparison of Maximum Simulated Likelihood and Hierarchical Bayes. Ken Train: "A Comparison of Hierarchical Bayes and Maximum Simulated Likelihood for Mixed Logit."
  Mixed logit:
  U(i,t,j) = βᵢ'x(i,t,j) + ε(i,t,j)
  i = 1,...,N individuals; t = 1,...,Tᵢ choice situations; j = 1,...,J alternatives (may also vary).

  25. Stochastic Structure: Conditional Likelihood.
  Prob(i,j,t) = exp(βᵢ'x(i,j,t)) / Σ_{j=1..J} exp(βᵢ'x(i,j,t))
  Likelihoodᵢ = Π_{t=1..Tᵢ} exp(βᵢ'x(i,j*,t)) / Σ_{j=1..J} exp(βᵢ'x(i,j,t))
  j* = indicator for the specific choice made by i at time t.
  Note the individual-specific parameter vector, βᵢ.

  26. Classical Approach.
  βᵢ ~ N[b, Σ]; write βᵢ = b + wᵢ = b + Γvᵢ, where ΓΓ' = Σ (e.g., Γ = Σ^(1/2), or Γ = diag(γ₁,...,γ_K) if the coefficients are uncorrelated).
  Log likelihood = Σ_{i=1..N} log ∫ [ Π_{t=1..Tᵢ} exp((b + wᵢ)'x(i,j*,t)) / Σ_{j=1..J} exp((b + wᵢ)'x(i,j,t)) ] f(wᵢ) dwᵢ.
  Maximize over b, Γ (the random parameters model) using maximum simulated likelihood.
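
A sketch of the simulated log likelihood in Python/NumPy (added here, not from the slides), replacing the integral over wᵢ by an average over R draws βᵢᵣ = b + Γvᵢᵣ with Γ diagonal. The data arrays, dimensions, and the value of R are assumptions for the illustration.

import numpy as np

rng = np.random.default_rng(4)

def simulated_loglik(b, gamma, X, choices, R=200, rng=rng):
    """Simulated log likelihood for a mixed logit with beta_i = b + gamma * v_i,
    v_i ~ N(0, I) and gamma a vector of standard deviations (uncorrelated case).
    X: array (N, T, J, K) of attributes; choices: array (N, T) of chosen alternative indices."""
    N, T, J, K = X.shape
    loglik = 0.0
    for i in range(N):
        v = rng.standard_normal((R, K))                   # R simulation draws for person i
        betas = b + v * gamma                             # (R, K) random coefficients
        util = np.einsum('tjk,rk->rtj', X[i], betas)      # (R, T, J) systematic utilities
        probs = np.exp(util - util.max(axis=2, keepdims=True))
        probs /= probs.sum(axis=2, keepdims=True)         # logit probabilities
        chosen = probs[:, np.arange(T), choices[i]]       # (R, T) prob of the observed choices
        loglik += np.log(chosen.prod(axis=1).mean())      # average over draws, then log
    return loglik

# Tiny illustration with simulated data (dimensions are arbitrary assumptions)
N, T, J, K = 50, 4, 3, 2
X = rng.standard_normal((N, T, J, K))
choices = rng.integers(0, J, size=(N, T))
print(simulated_loglik(np.zeros(K), np.ones(K), X, choices))

In practice this function would be handed to a numerical optimizer over (b, Γ), which is the maximum simulated likelihood estimator described on the slide.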

  27. Bayesian Approach: Gibbs Sampling and Metropolis-Hastings.
  Posterior ∝ [ Π_{i=1..N} L(dataᵢ | βᵢ) ] × priors
  Prior = g(β₁,...,β_N | b, Ω) × p(Ω) × p(b), where g is normal (the assumed distribution of the individual parameters), p(Ω) is inverse gamma (for the variance parameters), and p(b) is normal with a large variance.

  28. Gibbs Sampling from Posteriors: b.
  p(b | β₁,...,β_N, Ω) = Normal[ β̄, (1/N)Ω ], where β̄ = (1/N) Σ_{i=1..N} βᵢ.
  Easy to sample from: a normal with known mean and variance, obtained by transforming a set of draws from the standard normal.
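
A one-step sketch of this draw in Python (an added illustration); the current values of the βᵢ draws and of Ω are placeholders.

import numpy as np

rng = np.random.default_rng(5)

# Hypothetical current values of the individual beta_i draws and of Omega
betas = rng.standard_normal((500, 3))        # N = 500 individuals, K = 3 coefficients
Omega = np.diag([0.5, 1.0, 2.0])

# b | beta_1,...,beta_N, Omega ~ N[ mean of the beta_i, Omega / N ]
beta_bar = betas.mean(axis=0)
b_draw = rng.multivariate_normal(beta_bar, Omega / len(betas))
print(b_draw)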

  29. Gibbs Sampling from Posteriors: Ω (the variances).
  p(ω_k | b, β₁,...,β_N) ~ Inverse Gamma[ 1 + N, 1 + N·V_k ], where V_k = (1/N) Σ_{i=1..N} (β_{i,k} − b_k)², for each k = 1,...,K.
  Draw from the inverse gamma for each k: draw R = 1 + N values h_{r,k} from N[0,1]; then the draw is (1 + N·V_k) / Σ_{r=1..R} h²_{r,k}.
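
The same recipe in Python (an added illustration); the current βᵢ and b values are placeholders.

import numpy as np

rng = np.random.default_rng(6)

# Hypothetical current values of b and the individual beta_i draws
betas = rng.standard_normal((500, 3))        # N = 500, K = 3
b = betas.mean(axis=0)
N, K = betas.shape

# For each k: V_k = (1/N) sum_i (beta_ik - b_k)^2, then
# omega_k = (1 + N*V_k) divided by the sum of (1+N) squared standard normals
V = np.mean((betas - b) ** 2, axis=0)
h = rng.standard_normal((1 + N, K))
omega = (1 + N * V) / np.sum(h ** 2, axis=0)
print(omega)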

  30. Gibbs Sampling from Posteriors: βᵢ.
  p(βᵢ | b, Ω) = M × L(dataᵢ | βᵢ) × g(βᵢ | b, Ω)
  M = a constant, L = the likelihood, g = the prior. (This is the definition of the posterior.) It is not clear how to sample from this directly, so use the Metropolis-Hastings algorithm.

  31. Metropolis-Hastings Method. Define:
  β_{i,0} = an 'old' draw (vector); β_{i,1} = the 'new' draw (vector);
  d = rΛv, where
  r = a constant (see below),
  Λ = the diagonal matrix of standard deviations,
  v = a vector of K draws from the standard normal.

  32. Metropolis-Hastings: A Draw of βᵢ.
  Trial value: β_{i,1} = β_{i,0} + d.
  R = Posterior(β_{i,1}) / Posterior(β_{i,0})   (the constants M cancel).
  U = a random draw from U(0,1). If U < R, use β_{i,1}; else keep β_{i,0}.
  During the Gibbs iterations, the constant r in d controls the acceptance rate; aim for about .4.
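
A generic sketch of this accept/reject step in Python (added here, not from the slides). The log_posterior function, the tuning constant r, the proposal standard deviations, and the toy target are assumptions; the ratio is computed on the log scale for numerical stability.

import numpy as np

rng = np.random.default_rng(7)

def mh_step(beta_old, log_posterior, r, sd, rng=rng):
    """One Metropolis-Hastings update of an individual beta_i.
    log_posterior: returns log L(data_i | beta) + log g(beta | b, Omega);
    r: tuning constant; sd: vector of proposal standard deviations."""
    d = r * sd * rng.standard_normal(beta_old.size)    # d = r * (diag std devs) * v
    beta_new = beta_old + d                            # trial value
    log_R = log_posterior(beta_new) - log_posterior(beta_old)
    if np.log(rng.uniform()) < log_R:                  # equivalent to U < R
        return beta_new, True                          # accept the trial value
    return beta_old, False                             # keep the old draw

# Toy illustration: sample from a standard normal "posterior"
log_post = lambda beta: -0.5 * np.sum(beta ** 2)
beta, n_accept = np.zeros(2), 0
for _ in range(5000):
    beta, accepted = mh_step(beta, log_post, r=1.0, sd=np.ones(2))
    n_accept += accepted
print("acceptance rate:", n_accept / 5000)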

  33. Application: Energy Suppliers. N = 361 individuals, 2 to 12 hypothetical suppliers. X = (1) fixed rates, (2) contract length, (3) local (0,1), (4) well-known company (0,1), (5) offers TOD rates (0,1), (6) offers seasonal rates (0,1).

  34. Estimates: Mean of Individual βᵢ

Variable       MSL Estimate       Bayes Posterior Mean
Price          -1.04  (0.0374)    -1.04  (0.396)
Contract       -0.208 (0.0240)    -0.194 (0.0224)
Local           2.40  (0.127)      2.41  (0.140)
Well Known      1.74  (0.0927)     1.71  (0.100)
TOD            -9.94  (0.337)    -10.0   (0.315)
Seasonal      -10.2   (0.333)    -10.2   (0.310)

  35. Reconciliation: A Theorem (Bernstein-von Mises). The posterior distribution converges to normal with covariance matrix equal to 1/n times the information matrix (the same as the classical MLE). (The distribution that is converging is the posterior, not the sampling distribution of the estimator of the posterior mean.) The posterior mean (empirically) converges to the mode of the likelihood function, which is the MLE. A proper prior disappears asymptotically. The asymptotic sampling distribution of the posterior mean is therefore the same as that of the MLE.
