Sampling Distributions and Point Estimation


Explore the concepts of sampling distributions, point estimation, standard error, maximum likelihood, bias, confidence intervals, and the central limit theorem. Discover how statistics are used to estimate parameters and delve into sampling distributions of means.

  • Sampling
  • Distributions
  • Point Estimation
  • Parameters
  • Statistics


Presentation Transcript


  1. Sampling Distributions & Point Estimation

  2. Questions What is a sampling distribution? What is the standard error? What is the principle of maximum likelihood? What is bias (in the statistical sense)? What is a confidence interval? What is the central limit theorem? Why is the number 1.96 a big deal?

  3. Population & Sample Population and sample space. Population vs. sample. Population parameter vs. sample statistic.

  4. Parameter Estimation We use statistics to estimate parameters, e.g., effectiveness of pilot training, effectiveness of psychotherapy. The sample mean (X-bar) estimates mu; the sample SD estimates sigma.

  5. Sampling Distribution (1) A sampling distribution is a distribution of a statistic over all possible samples of size N. To get a sampling distribution: 1. Take a sample of size N (a fixed number like 5, 10, or 1000) from a population. 2. Compute the statistic (e.g., the mean) and record it. 3. Repeat steps 1 and 2 many times (strictly, once for every possible sample; infinitely many for large populations). 4. Plot the result: a distribution of the statistic over all possible samples of size N.
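The four steps above can be sketched by simulation. This is a minimal illustration, not part of the original deck; the population (die faces), sample size N = 5, and number of repetitions are illustrative choices.

```python
import random
import statistics

random.seed(0)
population = [1, 2, 3, 4, 5, 6]  # hypothetical population: die faces
N = 5                            # step 1: sample size
num_samples = 10_000             # step 3: "repeat a lot" (approximating all samples)

# Steps 1-3: repeatedly draw a sample of size N with replacement
# and record its mean.
sample_means = [
    statistics.mean(random.choices(population, k=N))
    for _ in range(num_samples)
]

# Step 4: summarize the resulting sampling distribution.
# Its mean should be close to the population mean, 3.5.
print(round(statistics.mean(sample_means), 2))
```

With enough repetitions, the simulated distribution approximates the true sampling distribution of the mean.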

  6. Suppose Population has 6 elements: 1, 2, 3, 4, 5, 6 (like numbers on dice) We want to find the sampling distribution of the mean for N=2 If we sample with replacement, what can happen?

  7. Possible Outcomes With replacement there are 36 equally likely (1st, 2nd) pairs, each with a sample mean M. The possible means and the number of ways to get each:
  Mean: 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0
  Ways:   1   2   3   4   5   6   5   4   3   2   1

  8. Histogram Sampling distribution for mean of 2 dice. 1+2+3+4+5+6 = 21. 21/6 = 3.5 There is only 1 way to get a mean of 1, but 6 ways to get a mean of 3.5.
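The enumeration behind the histogram can be reproduced exactly: list all 36 equally likely pairs of die faces and count how often each sample mean occurs. A minimal sketch:

```python
from itertools import product
from collections import Counter

# All 36 equally likely (die1, die2) pairs, with replacement,
# and the count of each possible sample mean.
counts = Counter((a + b) / 2 for a, b in product(range(1, 7), repeat=2))

for mean_value in sorted(counts):
    print(mean_value, counts[mean_value])
# 1.0 occurs 1 way; 3.5 occurs 6 ways; 6.0 occurs 1 way
```

The counts match the slide: one way to get a mean of 1, six ways to get the central mean of 3.5.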

  9. Sampling Distribution (2) The sampling distribution shows the relation between the probability of a statistic and the statistic's value for all possible samples of size N drawn from a population. [Figure: hypothetical distribution of sample means, f(M) plotted against the mean value.]

  10. Sampling Distribution Mean and SD The mean of the sampling distribution is defined the same way as for any other distribution (the expected value). The SD of the sampling distribution is the standard error. Important and useful. The variance of the sampling distribution is the expected value of a squared difference, i.e., a mean square. Review: for a statistic G, sigma_G^2 = E(G - mu_G)^2.

  11. Review What is a sampling distribution? What is the standard error of a statistic?

  12. Statistics as Estimators We use sample data to compute statistics. The statistics estimate population values, e.g., X-bar estimates mu. An estimator is a method for producing a best guess about a population value. An estimate is a specific value provided by an estimator. We want good estimates. What is a good estimator? What properties should it have?

  13. Maximum Likelihood (1) Likelihood is a conditional probability: L = p(x = data | theta = value), the probability (say) that x has some value given that the parameter theta has some value. L1 is the probability of observing heights of 68 and 70 inches [data] given adult males [theta]. L2 is the probability of 68 and 70 inches given adult females. In cards, L = p(Jack | Heart) = 1/13. Theta could be continuous or discrete.

  14. Maximum Likelihood (2) Suppose we know the function (e.g., binomial, normal) but not the value of theta. The maximum likelihood principle says: take the estimate of theta that makes the likelihood of the data maximum. MLP says: choose the value of theta that maximizes L(x1, x2, ..., xN | theta).

  15. Maximum Likelihood (3) Suppose we have two hypothesized values for the proportion of male grad students at USF: .50 and .40. We randomly sample 15 students and find that 9 are male. Calculate the likelihood of each using the binomial:
  L(x = 9 | p = .50, N = 15) = C(15, 9) (.50)^9 (.50)^6 = .153
  L(x = 9 | p = .40, N = 15) = C(15, 9) (.40)^9 (.60)^6 = .061
  The .50 estimate is better because it makes the data more likely.
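The two likelihoods on this slide come straight from the binomial formula L = C(N, x) p^x (1 - p)^(N - x). A quick check, with x = 9 males in N = 15 students:

```python
from math import comb

def binomial_likelihood(p, x=9, N=15):
    """Binomial probability of x successes in N trials given p."""
    return comb(N, x) * p**x * (1 - p)**(N - x)

print(round(binomial_likelihood(0.50), 3))  # 0.153
print(round(binomial_likelihood(0.40), 3))  # 0.061
```

The data are more probable under p = .50 than under p = .40, matching the slide's conclusion.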

  16. Likelihood Function [Figure: likelihood as a function of theta (p value), computed from the binomial for the finding on the previous slide (9 of 15 male); the curve peaks near .60.]

  17. Maximum Likelihood (4) In the example, the best (maximum likelihood) estimate would be 9/15 = .60. There is a general class called maximum likelihood estimators that find the value of theta that maximizes the likelihood of a sample result. ML is one principle of goodness of an estimator.
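That the maximum falls at the sample proportion 9/15 = .60 can be seen with a coarse grid search over theta. This is an illustrative sketch, not the deck's own code:

```python
from math import comb

def likelihood(p, x=9, N=15):
    """Binomial likelihood of x = 9 males in N = 15, given theta = p."""
    return comb(N, x) * p**x * (1 - p)**(N - x)

# Evaluate the likelihood on a grid of candidate theta values
# and pick the one where it is largest.
grid = [i / 100 for i in range(1, 100)]
mle = max(grid, key=likelihood)
print(mle)  # 0.6
```

In practice ML estimates are usually found analytically (here the calculus gives exactly x/N) or by numerical optimization, but the grid makes the "choose theta to maximize L" idea concrete.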

  18. More Goodness Bias. If E(statistic) = parameter, the estimator is unbiased. If it is unbiased, the mean of the sampling distribution equals the parameter. The sample mean has this property: E(X-bar) = mu. The sample variance (SS/N) is biased. Unbiasedness is good because, if unbiased, you will be exactly right on average.

  19. Sampling Distribution of the Mean Unbiased: E(X-bar) = mu. Variance of the sampling distribution of means based on N observations: sigma_M^2 = sigma^2 / N. Standard error of the mean: sigma_M = sigma / sqrt(N). Law of large numbers: large samples produce sample estimates very close to the parameter.
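The relation sigma_M^2 = sigma^2 / N can be verified exactly on the dice example from earlier slides: enumerate all 36 samples of size N = 2 and compare the variance of their means to sigma^2 / 2. A minimal check:

```python
from itertools import product

faces = range(1, 7)
mu = sum(faces) / 6
# Population variance of a single die: 35/12.
sigma2 = sum((x - mu) ** 2 for x in faces) / 6

# Means of all 36 equally likely samples of size N = 2.
means = [(a + b) / 2 for a, b in product(faces, repeat=2)]
# Variance of the sampling distribution of the mean.
var_of_means = sum((m - mu) ** 2 for m in means) / 36

# sigma_M^2 should equal sigma^2 / N with N = 2.
print(abs(var_of_means - sigma2 / 2) < 1e-12)  # True
```

Because every possible sample is enumerated, this is an exact identity rather than a simulation.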

  20. Unbiased Estimate of Variance It can be shown that E(S^2) = sigma^2 (N - 1) / N, where S^2 = SS / N. The sample variance is too small by a factor of (N - 1) / N. We fix this with s^2 = sum(X - X-bar)^2 / (N - 1). Although this variance estimate is unbiased, the SD (its square root) is still biased, but most inferential work is based on the variance, not the SD.
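The (N - 1)/N shrinkage factor can also be checked exactly on the dice population: average SS/N over all 36 samples of size N = 2 and compare it with sigma^2. A sketch:

```python
from itertools import product

faces = range(1, 7)
# Population variance of a single die: 35/12.
sigma2 = sum((x - 3.5) ** 2 for x in faces) / 6

N = 2
biased = []
for sample in product(faces, repeat=N):       # every possible sample
    m = sum(sample) / N
    biased.append(sum((x - m) ** 2 for x in sample) / N)  # S^2 = SS / N

expected_biased = sum(biased) / len(biased)   # E(S^2) over all samples
print(round(expected_biased / sigma2, 4))     # 0.5, i.e. (N - 1)/N with N = 2
```

Dividing SS by N - 1 instead of N removes exactly this factor, which is why s^2 is the unbiased estimator.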

  21. Review What is the principle of maximum likelihood? Define bias Is the sample variance (SS divided by N) a biased estimator?

  22. Interval Estimation Use the standard error of the mean to create a bracket, or confidence interval, to show where good estimates of the mean are. The sampling distribution of the mean is nice* when N > 20. Therefore: p(M - 3 sigma_M <= mu <= M + 3 sigma_M) >= .95. Suppose M = 100, SD = 14, N = 49. Then SEM = 14/7 = 2, and the bracket is 100 - 6 = 94 to 100 + 6 = 106. The probability statement is about the sample bracket, not about mu. * Unimodal and symmetric

  23. Review What is a confidence interval? Suppose M = 50, SD = 10, and N =100. What is the confidence interval? SEM = 10/sqrt(100) = 10/10 = 1 CI (lower) = M-3SEM = 50-3 = 47 CI (upper) = M+3SEM = 50+3 = 53 CI = 47 to 53
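The review calculation above can be wrapped in a small helper. This uses the rough 3-SEM bracket from the slides; the function name and default are illustrative choices, not part of the deck:

```python
from math import sqrt

def bracket(M, SD, N, width=3):
    """Confidence bracket M +/- width standard errors of the mean."""
    sem = SD / sqrt(N)
    return M - width * sem, M + width * sem

print(bracket(50, 10, 100))  # (47.0, 53.0)
```

With M = 50, SD = 10, N = 100, the SEM is 10/10 = 1 and the bracket reproduces the slide's CI of 47 to 53.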

  24. Central Limit Theorem 1. Sampling distribution of means becomes normal as N increases, regardless of shape of original distribution. 2. Applies to other statistics as well (e.g., binomial, variance)

  25. Properties of the Normal If a (parent) distribution is normal, the sampling distribution of the mean is normal regardless of N. If a distribution is normal, the sampling distributions of the mean and variance are independent.

  26. Binomial as N Increases

  27. Distribution of Mean vs Data Notice that the raw data are horribly skewed, but the distribution of means computed on samples of size 50 drawn randomly from the horrid distribution looks fine.

  28. Confidence Intervals for the Mean Over samples of size N, the probability is .95 that mu - 1.96 sigma_M <= X-bar <= mu + 1.96 sigma_M. Similarly, for sample values of the mean, the probability is .95 that X-bar - 1.96 sigma_M <= mu <= X-bar + 1.96 sigma_M. The population mean is likely to be within about 2 standard errors of the sample mean. We can use the normal to create a confidence interval of any size (85, 99 pct, etc.).

  29. Size of the Confidence Interval The size of the confidence interval depends on the desired certainty (e.g., 95 vs. 99 pct) and on the size of the standard error of the mean, sigma_M = sigma / sqrt(N). The standard error is controlled by the population SD and the sample size; we can control the sample size. Suppose SD = 10. If N = 25, then SEM = 2 and the CI width is about 8. If N = 100, then SEM = 1 and the width is about 4. The CI shrinks as N increases, but because of the square root the change in the CI decreases as N gets large. Less bang for the buck as N gets big.
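The square-root slowdown is easy to see in numbers: quadrupling N only halves the 95 pct CI width (about 2 x 1.96 standard errors). A sketch, assuming the slide's SD = 10:

```python
from math import sqrt

SD = 10
for N in (25, 100, 400, 1600):
    sem = SD / sqrt(N)
    # 95 pct CI width is roughly 2 * 1.96 standard errors.
    print(N, sem, round(2 * 1.96 * sem, 2))
```

Each fourfold increase in N cuts the width in half, which is the "less bang for the buck" the slide describes.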

  30. Review What is the central limit theorem? Why is the number 1.96 a big deal? Assume that scores on a curiosity scale are normally distributed. If the sample mean is 50 based on 100 people and the population SD is 10, find an approx 99 pct CI for the population mean. Construct a 95 percent confidence interval for the Blackmore (car data) exercise mean.
