Statistical Inference in Data Analysis: Application and Interpretation

Explore statistical inference through the lens of credit modeling and default behavior, using 13,444 credit card applications from November 1992, and see how covariates help explain the variables that matter most to lenders.

  • Statistical Inference
  • Data Analysis
  • Credit Modeling
  • Default Behavior
  • Covariates


Presentation Transcript


  1. Statistics and Data Analysis Professor William Greene Stern School of Business IOMS Department Department of Economics 1/46 Part 12: Statistical Inference

  2. Statistics and Data Analysis Part 12 Statistical Inference: Confidence Intervals 2/46 Part 12: Statistical Inference

  3. 3/46 Part 12: Statistical Inference

  4. Statistical Inference: Point Estimates and Confidence Intervals. Statistical inference: estimation of population features using sample data; sampling distributions of statistics; point estimates and the law of large numbers; uncertainty in estimation; interval estimation. 4/46 Part 12: Statistical Inference

  5. Application: Credit Modeling. A 1992 American Express analysis of the application process (acceptance or rejection) and of cardholder behavior (loan default, average monthly expenditure, general credit usage/behavior). 13,444 applications in November 1992. 5/46 Part 12: Statistical Inference

  6. Modeling Fair Isaac's Acceptance Rate. 13,444 applicants for a credit card (November 1992). Experiment = a randomly picked application. Let X = 0 if rejected, X = 1 if accepted. [Figure: Rejected vs. Approved applications.] 6/46 Part 12: Statistical Inference

  7. The Question They Are Really Interested In: Default Of 10,499 people whose application was accepted, 996 (9.49%) defaulted on their credit account (loan). We let X denote the behavior of a credit card recipient. X = 0 if no default (Bernoulli) X = 1 if default This is a crucial variable for a lender. They spend endless resources trying to learn more about it. Mortgage providers in 2000-2007 could have, but deliberately chose not to. 7/46 Part 12: Statistical Inference

  8. The data contained many covariates. Do these help explain the interesting variable? 8/46 Part 12: Statistical Inference

  9. Variables Typically Used By Credit Scorers 9/46 Part 12: Statistical Inference

  10. Sample Statistics. The population has characteristics: mean, variance, median, percentiles. A random sample is a slice of the population. 10/46 Part 12: Statistical Inference

  11. Populations and Samples. Population features of a random variable: Mean = μ = expected value of the random variable. Standard deviation = σ = square root of the expected squared deviation of the random variable from the mean. Percentiles, such as the median = the value that divides the population in half (a value such that 50% of the population is below it). Sample statistics that describe the data: Sample mean = x̄ = the average value in the sample. Sample standard deviation = s, which tells us where the sample values will be (using our empirical rule, for example). The sample median helps to locate the sample data on a figure that displays the data, such as a histogram. 11/46 Part 12: Statistical Inference

  12. The Overriding Principle in Statistical Inference The characteristics of a random sample will mimic (resemble) those of the population Mean, median, standard deviation, etc. Histogram The resemblance becomes closer as the number of observations in the (random) sample becomes larger. (The law of large numbers) 12/46 Part 12: Statistical Inference

  13. Point Estimation. We use sample features to estimate population characteristics. The mean of a sample from the population is an estimator of the mean of the population: x̄ is an estimator of μ. The standard deviation of a sample from the population is an estimator of the standard deviation of the population: s is an estimator of σ. 13/46 Part 12: Statistical Inference

  14. Point Estimator. A formula, used with the sample data, to estimate a characteristic of the population (a parameter). It provides a single value: x̄ = (1/N) Σᵢ xᵢ is a point estimator of μ; s = √[ Σᵢ (xᵢ − x̄)² / (N − 1) ] is a point estimator of σ. 14/46 Part 12: Statistical Inference
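
A minimal sketch of these two point estimators (not part of the original slides; Python with numpy is assumed, and the tiny 0/1 sample is made up for illustration):

    import numpy as np

    x = np.array([0, 1, 1, 1, 0, 1, 1, 1])   # hypothetical sample of reject (0) / accept (1) outcomes
    N = len(x)

    # Point estimator of the population mean mu: x-bar = (1/N) * sum of x_i
    x_bar = x.sum() / N

    # Point estimator of the population standard deviation sigma:
    # s = sqrt( sum of (x_i - x_bar)^2 / (N - 1) )
    s = np.sqrt(((x - x_bar) ** 2).sum() / (N - 1))

    print(x_bar, s)   # same results as np.mean(x) and np.std(x, ddof=1)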

  15. Use random samples and basic descriptive statistics. What is the breach rate in a pool of tens of thousands of mortgages? ( Breach = improperly underwritten or serviced or otherwise faulty mortgage.) 15/46 Part 12: Statistical Inference

  16. The forensic analysis was an examination of statistics from a random sample of 1,500 loans. 16/46 Part 12: Statistical Inference

  17. Sampling Distribution The random sample is itself random, since each member is random. Statistics computed from random samples will vary as well. 17/46 Part 12: Statistical Inference

  18. Estimating Fair Isaac's Acceptance Rate. 13,444 applicants for a credit card (November 1992). Experiment = a randomly picked application. Let X = 0 if rejected, X = 1 if accepted. [Figure: Rejected vs. Approved applications.] The 13,444 observations are the population. The true proportion is μ = 0.780943. We draw samples of N from the 13,444 and use the observations to estimate μ. 18/46 Part 12: Statistical Inference

  19. The Estimator. The sample proportion we are examining here is a sample mean. X = 0 if the individual's application is rejected; X = 1 if it is accepted. The "acceptance rate" is x̄ = (1/N) Σᵢ xᵢ. The sample mean x̄ is an estimator of μ, the population mean; the population proportion is μ = 0.780943. 19/46 Part 12: Statistical Inference

  20. x̄ in 100 samples with N = 144 in each sample. 0.780943 is the true proportion in the population we are sampling from. 20/46 Part 12: Statistical Inference

  21. The Mean Is a Good Estimator. Sometimes x̄ is too high, sometimes too low; on average, it seems to be right. The sample mean of the 100 sample estimates is 0.7844; the population mean (true proportion) is 0.7809. 21/46 Part 12: Statistical Inference

  22. What Makes it a Good Estimator? The average of the averages will hit the true mean (on average) The mean is UNBIASED (No moral connotations) 22/46 Part 12: Statistical Inference

  23. What Does the Law of Large Numbers Say? The sampling variability in the estimator gets smaller as N gets larger. If N gets large enough, we should hit the target exactly; The mean is CONSISTENT 23/46 Part 12: Statistical Inference

  24. [Histograms of the 100 sample means for N = 144, N = 1,024, and N = 4,900, each plotted on the same horizontal scale from 0.70 to 0.88; the spread narrows as N grows.] 24/46 Part 12: Statistical Inference

  25. Uncertainty in Estimation. How to quantify the variability in the proportion estimator: summary statistics of the sample means across the 100 samples (the population mean, i.e. the true proportion, is 0.7809).
      Variable  | Mean     Std.Dev.  Minimum   Maximum   Cases  Missing
      RATES144  | .78444   .03278    .715278   .868056   100    0      (means of the 100 samples of 144 observations)
      RATE1024  | .78366   .01293    .754883   .812500   100    0      (means of the 100 samples of 1,024 observations)
      RATE4900  | .78079   .00461    .770000   .792449   100    0      (means of the 100 samples of 4,900 observations)
      25/46 Part 12: Statistical Inference
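
The table above can be mimicked with a short simulation. A sketch under the assumption that the population behaves like a Bernoulli variable with p = 0.780943 (the seed and the use of numpy are illustrative choices, not part of the original slides):

    import numpy as np

    rng = np.random.default_rng(12345)   # arbitrary seed for reproducibility
    TRUE_P = 0.780943                    # acceptance proportion in the 13,444 applications

    for n in (144, 1024, 4900):
        # 100 sample proportions, each the mean of n Bernoulli(TRUE_P) draws
        means = rng.binomial(n, TRUE_P, size=100) / n
        print(f"N={n:5d}  mean={means.mean():.5f}  sd={means.std(ddof=1):.5f}  "
              f"min={means.min():.5f}  max={means.max():.5f}")

The averages should hover near 0.7809 and the standard deviation should shrink roughly in proportion to 1/√N, echoing the RATES144/RATE1024/RATE4900 rows (exact numbers will differ because the draws are random).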

  26. Range of Uncertainty The point estimate will be off (high or low) Quantify uncertainty in sampling error. Look ahead: If I draw a sample of 100, what value(s) should I expect? Based on unbiasedness, I should expect the mean to hit the true value. Based on my empirical rule, the value should be within plus or minus 2 standard deviations 95% of the time. What should I use for the standard deviation? 26/46 Part 12: Statistical Inference

  27. Estimating the Variance of the Distribution of Means. We will have only one sample! Use what we know about the variance of the mean: Var[mean] = σ²/N. Estimate σ² using the data: s² = Σᵢ (xᵢ − x̄)² / (N − 1). Then divide s² by N. 27/46 Part 12: Statistical Inference

  28. The Sampling Distribution. For sampling from the population and using the sample mean to estimate the population mean: the expected value of x̄ will equal μ; the standard deviation of x̄ will equal σ/√N; the CLT suggests a normal distribution. 28/46 Part 12: Statistical Inference

  29. The sample mean for a given sample may be quite far from the true mean, or it may be very close to the true mean. This is the sampling variability of the mean as an estimator of μ. 29/46 Part 12: Statistical Inference

  30. Recognizing Sampling Variability. To describe the distribution of sample means, use the sample mean x̄ to estimate the population expected value. To describe the variability, use the sample standard deviation, s, divided by the square root of N. To accommodate the distribution, use the empirical rule: 95%, ±2 standard deviations. 30/46 Part 12: Statistical Inference

  31. Estimating the Sampling Variability. For one of the samples, the mean was 0.849 and s was 0.358, so s/√N = 0.0298. If this were my estimate, I would use 0.849 ± 2 × 0.0298. For a different sample, the mean was 0.750 and s was 0.433, so s/√N = 0.0361. If this were my estimate, I would use 0.750 ± 2 × 0.0361. 31/46 Part 12: Statistical Inference
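
A sketch of the same calculation for one hypothetical sample of N = 144 accept/reject outcomes (the sample is drawn artificially here; the ±2 multiplier is the empirical rule used on the slide):

    import numpy as np

    rng = np.random.default_rng(7)                  # arbitrary seed
    sample = rng.binomial(1, 0.780943, size=144)    # one hypothetical sample, N = 144

    x_bar = sample.mean()
    s = sample.std(ddof=1)
    se = s / np.sqrt(len(sample))                   # estimated standard deviation of x-bar

    print(f"{x_bar:.3f} +/- 2 x {se:.4f}  ->  [{x_bar - 2*se:.3f}, {x_bar + 2*se:.3f}]")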

  32. Estimates plus and minus two standard errors. The interval mean ± 2 standard errors almost always includes the true value of 0.7809. The arrows show the cases in which the interval does not contain 0.7809. 32/46 Part 12: Statistical Inference

  33. How to use these results The sample mean is my best guess of the population mean. I must recognize that there will be estimation error because of random sampling. I use the confidence interval to suggest a range of plausible values for the mean, based on my sample information. 33/46 Part 12: Statistical Inference

  34. Will the Interval Contain the True Value? Uncertain: the midpoint is random; it may be very high or low, in which case, no. Sometimes it will contain the true value. The degree of certainty depends on the width of the interval. Very narrow interval: very uncertain (±1 standard error). Wide interval: much more certain (±2 standard errors). Extremely wide interval: nearly perfectly certain (±2.5 standard errors). Infinitely wide interval: absolutely certain. 34/46 Part 12: Statistical Inference

  35. The Degree of Certainty The interval is a Confidence Interval The degree of certainty is the degree of confidence. The standard in statistics is 95% certainty (about two standard errors). I can be more confident if I make the interval wider. I can be 100% confident if I make the interval infinitely wide. This is not helpful. 35/46 Part 12: Statistical Inference

  36. 67% and 95% Confidence Intervals 36/46 Part 12: Statistical Inference

  37. Monthly Spending Over First 12 Months. Population = 10,239 individuals who (1) received the card, (2) used the card at least once, and (3) spent no more than $2,500 per month. What is the true mean of the population that produced these data? 37/46 Part 12: Statistical Inference

  38. Estimating the Mean. Given a sample of N = 225 observations with x̄ = 241.242 and s = 276.894, estimate the population mean. Point estimate: 241.242. 66⅔% confidence interval: 241.242 ± 1 × 276.894/√225 = 222.78 to 259.70. 95% confidence interval: 241.242 ± 2 × 276.894/√225 = 204.32 to 278.16. 99% confidence interval: 241.242 ± 2.5 × 276.894/√225 = 195.09 to 287.39. 38/46 Part 12: Statistical Inference
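
A quick arithmetic check of the three intervals, using only the sample figures quoted on the slide (the 1, 2, and 2.5 multipliers are the empirical-rule values discussed on the next slide):

    import math

    x_bar, s, n = 241.242, 276.894, 225
    se = s / math.sqrt(n)                 # 276.894 / 15 = 18.46

    for label, k in (("66 2/3%", 1.0), ("95%", 2.0), ("99%", 2.5)):
        print(f"{label}: {x_bar - k*se:.2f} to {x_bar + k*se:.2f}")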

  39. Where Did the Interval Widths Come From? Empirical rule of thumb: 2/3 = 66 2/3% is contained in an interval that is the mean plus and minus 1 standard deviation 95% is contained in a 2 standard deviation interval 99% is contained in a 2.5 standard deviation interval. Based exactly on the normal distribution, the exact values would be 0.9675 standard deviations for 2/3 (rather than 1.00) 1.9600 standard deviations for 95% (rather than 2.00) 2.5760 standard deviations for 99% (rather than 2.50) 39/46 Part 12: Statistical Inference
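
The exact normal multipliers can be recovered from the standard normal inverse CDF; a quick check, assuming scipy is available:

    from scipy.stats import norm

    # Central coverage c leaves (1 - c)/2 in each tail, so the multiplier is the
    # (1 - (1 - c)/2) quantile of the standard normal distribution.
    for c in (2/3, 0.95, 0.99):
        print(f"{c:.4f} coverage: z = {norm.ppf(1 - (1 - c) / 2):.4f}")   # about 0.967, 1.960, 2.576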

  40. Large Samples If the sample is moderately large (over 30), one can use the normal distribution values instead of the empirical rule. The empirical rule is easier to remember. The values will be very close to each other. 40/46 Part 12: Statistical Inference

  41. Refinements (Important) When you have a fairly small sample (under 30) and you have to estimate using s, then both the empirical rule and the normal distribution can be a bit misleading. The interval you are using is a bit too narrow. You will find the appropriate widths for your interval in the t table The values depend on the sample size. (More specifically, on N-1 = the degrees of freedom.) 41/46 Part 12: Statistical Inference

  42. Critical Values For 95% and 99% using a sample of 15: Normal: 1.960 and 2.576 Empirical rule: 2.000 and 2.500 T[14] table: 2.145 and 2.977 Note that the interval based on t is noticeably wider. The values from t converge to the normal values (from above) as N increases. What should you do in practice? Unless the sample is quite small, you can usually rely safely on the empirical rule. If the sample is very small, use the t distribution. 42/46 Part 12: Statistical Inference
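
These critical values can be checked directly; a sketch using scipy's t and normal quantile functions for a sample of 15 (14 degrees of freedom):

    from scipy.stats import norm, t

    df = 15 - 1                                   # degrees of freedom for a sample of 15
    for conf in (0.95, 0.99):
        q = 1 - (1 - conf) / 2                    # upper-tail quantile point
        print(f"{conf:.0%}: normal = {norm.ppf(q):.3f}, t[{df}] = {t.ppf(q, df):.3f}")
        # 95%: normal 1.960 vs t[14] 2.145;  99%: normal 2.576 vs t[14] 2.977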

  43. [t table: critical values indexed by degrees of freedom n = N − 1, running from small samples to large samples.] 43/46 Part 12: Statistical Inference

  44. Application. A sports training center is examining the endurance of athletes. A sample of 17 observations on the number of hours for a specific task produces the following sample: 4.86, 6.21, 5.29, 4.11, 6.19, 3.58, 4.38, 4.70, 4.66, 5.64, 3.77, 2.11, 4.81, 3.31, 6.27, 5.02, 6.12. This being a biological measurement, we are confident that the underlying population is normal. Form a 95% confidence interval for the mean of the distribution. The sample mean is 4.766. The sample standard deviation, s, is 1.160. The standard error of the mean is 1.16/√17 = 0.281. Since this is a small sample from the normal distribution, we use the critical value from the t distribution with N − 1 = 16 degrees of freedom. From the t table (previous slide), the value of t[.025, 16] is 2.120. The confidence interval is 4.766 ± 2.120 × 0.281 = [4.170, 5.362]. 44/46 Part 12: Statistical Inference
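
The full calculation on this slide can be reproduced in a few lines (scipy is assumed for the t critical value):

    import numpy as np
    from scipy.stats import t

    hours = np.array([4.86, 6.21, 5.29, 4.11, 6.19, 3.58, 4.38, 4.70, 4.66,
                      5.64, 3.77, 2.11, 4.81, 3.31, 6.27, 5.02, 6.12])

    x_bar = hours.mean()                          # 4.766
    se = hours.std(ddof=1) / np.sqrt(len(hours))  # about 1.160 / sqrt(17) = 0.281
    t_crit = t.ppf(0.975, df=len(hours) - 1)      # t[.025, 16] = 2.120

    print(f"95% CI: [{x_bar - t_crit*se:.3f}, {x_bar + t_crit*se:.3f}]")   # about [4.170, 5.362]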

  45. Application: The Margin of Error. The percentage is a mean of Bernoulli variables: Xᵢ = 1 if the respondent favors the candidate, 0 if not, so the percentage equals 100 × (1/652) Σᵢ xᵢ. (1) Why do they tell you N = 652? (2) What do they mean by MoE = 3.8? (Can you show how they computed it?) Fundamental polling result: standard error = SE = √[p(1 − p)/N]; MOE = 1.96 × SE. The 95% confidence interval for the proportion of voters who will vote for Clinton is 50% ± 3.8% = [46.2%, 53.8%]. This does not overlap the interval for Trump, so they would predict Clinton to win the election (in NH); the result is not within the margin of error. Aug. 6, 2015. http://www.realclearpolitics.com/epolls/2016/president/nh/new_hampshire_trump_vs_clinton-5596.html 45/46 Part 12: Statistical Inference
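
A minimal sketch of the polling arithmetic, assuming p̂ = 0.50 and N = 652 as on the slide:

    import math

    p_hat, n = 0.50, 652
    se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the sample proportion
    moe = 1.96 * se                           # about 0.038, i.e. the reported 3.8 points

    print(f"MOE = {100*moe:.1f}%  ->  95% CI: [{100*(p_hat - moe):.1f}%, {100*(p_hat + moe):.1f}%]")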

  46. Summary. Methodology: statistical inference; application to credit scoring; sample statistics as estimators; point estimation; sampling variability; the law of large numbers; unbiasedness and consistency; sampling distributions; confidence intervals for a proportion and for a mean; using the normal and t distributions instead of the empirical rule for the width of the interval. 46/46 Part 12: Statistical Inference
