
Understanding the T-Test: Applications, Varieties, and Usage
Learn about the T-test, its main uses, relationship with the normal distribution, when to choose it over a z-test, varieties, standard error of means, factors influencing its size, and more. Explore single-sample, independent samples, dependent t-tests, and the role of degrees of freedom in the t-distribution.
The t-test: Inferences about Population Means
Questions What is the main use of the t-test? How is the distribution of t related to the unit normal? When would we use a t-test instead of a z-test? Why might we prefer one to the other? What are the chief varieties or forms of the t-test? What is the standard error of the difference between means? What are the factors that influence its size?
More Questions Identify the appropriate version of t to use for a given design. Compute and interpret t-tests appropriately. Given that H0: μ = 75; H1: μ ≠ 75; s = 14; N = 49; t(.05, 48) = 2.01, construct a rejection region. Draw a picture to illustrate.
Background The t-test is used to test hypotheses about means when the population variance is unknown (the usual case). Closely related to z, the unit normal. Developed by Gosset for the quality control of beer. Comes in 3 varieties: single sample, independent samples, and dependent samples.
What kind of t is it? Single-sample t: we have only 1 group and want to test against a hypothetical mean. Independent-samples t: we have 2 means, 2 groups, and no relation between groups, e.g., people randomly assigned to one of two groups. Dependent t: we have two means, and either the same people are in both groups or the people are related, e.g., husband-wife, left hand-right hand, hospital patient and visitor.
Single-sample z test For large samples (N > 100) we can use z to test hypotheses about means:
z = (X̄ − μ) / est σ_M, where est σ_M = s / √N and s² = Σ(X − X̄)² / (N − 1).
Suppose H0: μ = 10; H1: μ ≠ 10; s = 5; N = 200. Then
est σ_M = s / √N = 5 / √200 = 5 / 14.14 = .35.
If X̄ = 11, then z = (11 − 10) / .35 = 2.83; 2.83 > 1.96, so p < .05.
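As a quick check of the arithmetic in this example, here is a minimal Python sketch using the slide's numbers (s = 5, N = 200, sample mean 11, hypothesized mean 10; the variable names are mine):

```python
import math

s, N, xbar, mu0 = 5, 200, 11, 10
se = s / math.sqrt(N)        # estimated standard error of the mean: 5/14.14 ≈ .35
z = (xbar - mu0) / se        # z ≈ 2.83
print(round(se, 2), round(z, 2))   # 0.35 2.83 -> |z| > 1.96, so p < .05
```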
The t Distribution We use t when the population variance is unknown (the usual case) and sample size is small (N<100, the usual case). If you use a stat package for testing hypotheses about means, you will use t. The t distribution is a short, fat relative of the normal. The shape of t depends on its df. As N becomes infinitely large, t becomes normal.
Degrees of Freedom For the t distribution, degrees of freedom are always a simple function of the sample size, e.g., (N-1). One way of explaining df: if we know the total or mean, and all but one of the N scores, the last score is not free to vary; it is fixed by the other scores. E.g., 4 + 3 + 2 + X = 10 forces X = 1.
Single-sample t-test With a small sample size, we compute the same numbers as we did for z, but we compare them to the t distribution instead of the z distribution.
H0: μ = 10; H1: μ ≠ 10; X̄ = 11; s = 5; N = 25.
est σ_M = s / √N = 5 / √25 = 1; t = (X̄ − μ) / est σ_M = (11 − 10) / 1 = 1.
t(.05, 24) = 2.064 (c.f. z = 1.96). 1 < 2.064, n.s.
Confidence interval: X̄ ± t(.05, 24) · est σ_M = 11 ± 2.064(1) = [8.936, 13.064]. The interval is about 9 to 13 and contains 10, so n.s.
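The same computation as a minimal Python sketch (the slide's numbers; the critical value 2.064 for t(.05, 24) is taken as given from a t table rather than computed):

```python
import math

s, N, xbar, mu0 = 5, 25, 11, 10
se = s / math.sqrt(N)                      # 5/5 = 1
t = (xbar - mu0) / se                      # t = 1
tcrit = 2.064                              # t(.05, 24) from the table
lo, hi = xbar - tcrit * se, xbar + tcrit * se
print(t, lo, hi)                           # 1.0, interval ≈ [8.936, 13.064]
# |t| < 2.064 and the interval contains 10, so the result is n.s.
```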
R code for t distribution qt(c(.025, .975), df=7) [1] -2.364624 2.364624 qt(c(.025, .975), df=1000) [1] -1.962339 1.962339 pt(2, df=7) [1] 0.9571903 1-pt(2, df=7) [1] 0.04280966 If 2-tailed, would need p > .975 (or [1-p] less than .025) for significance
Review How are the distributions of z and t related? Given that H0: μ = 75; H1: μ ≠ 75; s = 14; N = 49; t(.05, 48) = 2.01, construct a rejection region. Draw a picture to illustrate.
Difference Between Means (1) Most studies have at least 2 groups (e.g., M vs. F, Exp vs. Control). If we want to know the difference in population means, the best guess is the difference in sample means.
Unbiased: E(ȳ1 − ȳ2) = E(ȳ1) − E(ȳ2) = μ1 − μ2.
Variance of the difference: var(ȳ1 − ȳ2) = σ²_M1 + σ²_M2.
Standard error: σ_diff = √(σ²_M1 + σ²_M2).
Difference Between Means (2) We can estimate the standard error of the difference between means:
est σ_diff = √(est σ²_M1 + est σ²_M2).
For large samples, we can use z. H0: μ1 − μ2 = 0; H1: μ1 − μ2 ≠ 0.
X̄1 = 10; N1 = 100; SD1 = 2; X̄2 = 12; N2 = 100; SD2 = 3.
est σ_diff = √(4/100 + 9/100) = √(13/100) = .36.
z_diff = (12 − 10 − 0) / .36 = 5.56; p < .05.
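A minimal Python sketch of this large-sample z for the difference between means (the slide's numbers; note the slide rounds the SE to .36 before dividing, which gives 5.56 rather than 5.55):

```python
import math

x1, n1, sd1 = 10, 100, 2
x2, n2, sd2 = 12, 100, 3
se_diff = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # sqrt(4/100 + 9/100) ≈ .36
z = (x2 - x1 - 0) / se_diff                      # ≈ 5.55 unrounded; p < .05
print(round(se_diff, 2), round(z, 2))
```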
Independent Samples t (1) Looks just like z:
t = [(ȳ1 − ȳ2) − (μ1 − μ2)] / est σ_diff, with df = N1 − 1 + N2 − 1 = N1 + N2 − 2.
The estimate of the squared standard error is: est σ²_diff = s²1/N1 + s²2/N2.
We assume that the variance in both groups is the same. There are varieties of the t-test that relax this assumption.
Independent Samples t Pooled standard error of the difference (computed):
est σ_diff = √( [(N1 − 1)s²1 + (N2 − 1)s²2] / (N1 + N2 − 2) · (N1 + N2) / (N1·N2) ).
Independent Samples t (2) H0: μ1 − μ2 = 0; H1: μ1 − μ2 ≠ 0.
t = [(ȳ1 − ȳ2) − (μ1 − μ2)] / est σ_diff.
ȳ1 = 18; s²1 = 7; N1 = 5; ȳ2 = 20; s²2 = 5.83; N2 = 7.
est σ_diff = √( [(4)(7) + (6)(5.83)] / 10 · 12/35 ) = √( (62.98/10)(12/35) ) = 1.47.
t_diff = (18 − 20 − 0) / 1.47 = −1.36; |−1.36| < t_crit = t(.05, 10) = 2.23, n.s.
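The pooled-SE computation above can be sketched in Python like so (the slide's numbers; variable names are mine):

```python
import math

y1, var1, n1 = 18, 7, 5
y2, var2, n2 = 20, 5.83, 7
# pooled variance, weighted by degrees of freedom
pooled = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
# standard error of the difference; (n1+n2)/(n1*n2) = 1/n1 + 1/n2
se_diff = math.sqrt(pooled * (n1 + n2) / (n1 * n2))   # ≈ 1.47
t = (y1 - y2 - 0) / se_diff                           # ≈ -1.36
print(round(se_diff, 2), round(t, 2))   # |t| < t(.05, 10) = 2.23, so n.s.
```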
Review What is the standard error of the difference between means? What are the factors that influence its size? Describe a design (what IV? What DV?) where it makes sense to use the independent samples t test.
Dependent t (1) Observations come in pairs: brother-sister, repeated measures.
σ²_diff = σ²_M1 + σ²_M2 − 2cov(ȳ1, ȳ2).
The problem is solved by finding the differences between pairs, Di = yi1 − yi2:
D̄ = ΣDi / N; s²_D = Σ(Di − D̄)² / (N − 1); est σ_MD = s_D / √N.
t = (D̄ − E(D)) / est σ_MD, with df = N(pairs) − 1.
Dependent t (2)
Pair   Brother   Sister   D     (D − D̄)²
1      5         7        −2    1
2      7         8        −1    0
3      3         3        0     1
Mean   ȳ1 = 5    ȳ2 = 6   D̄ = −1
s_D = √( Σ(D − D̄)² / (N − 1) ) = √(2/2) = 1.
est σ_MD = s_D / √N = 1/√3 = .58.
t = (D̄ − E(D)) / est σ_MD = (−1 − 0) / .58 = −1.72.
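A minimal Python sketch of the dependent t via difference scores (the slide's brother/sister data; unrounded, t comes out −1.73 rather than the slide's −1.72, which uses the rounded SE of .58):

```python
import math
import statistics

brother = [5, 7, 3]
sister = [7, 8, 3]
d = [b - s for b, s in zip(brother, sister)]   # difference scores: [-2, -1, 0]
dbar = statistics.mean(d)                      # -1
sd = statistics.stdev(d)                       # 1 (uses the N-1 denominator)
se = sd / math.sqrt(len(d))                    # 1/sqrt(3) ≈ .58
t = (dbar - 0) / se                            # ≈ -1.73, df = 3 - 1 = 2
print(round(t, 2))
```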
Assumptions The t-test is based on assumptions of normality and homogeneity of variance. You can test for both these (make sure you know how). As long as the samples in each group are large and nearly equal, the t-test is robust, that is, still good, even though assumptions are not met.
Review Describe a design where it makes sense to use a single-sample t. Describe a design where it makes sense to use a dependent samples t.
Strength of Association (1) The scientific purpose is to predict or explain variation. There are statistical indexes of how well our IV accounts for variance in the DV. These are measures of how strongly or closely associated our IVs and DVs are (omega-squared and R-squared are most commonly used).
Variance accounted for: ω² = (σ²_Y − σ²_{Y|X}) / σ²_Y = (μ1 − μ2)² / (4σ²_Y).
Strength of Association (2) How much of the variance in Y is associated with the IV?
ω² = (σ²_Y − σ²_{Y|X}) / σ²_Y = (μ1 − μ2)² / (4σ²_Y).
Compare the 1st (left-most) curve with the curve in the middle and the one on the right. In each case, how much of the variance in Y is associated with the IV, group membership? More in the second comparison. As the mean difference gets big, so does the variance accounted for.
[Figure: overlapping normal curves at increasing mean separations.]
Association & Significance Power increases with association (effect size) and sample size.
Effect size: d = (X̄1 − X̄2) / σ_p. Note the relation to a z score.
Significance = effect size × sample size:
t = (X̄ − μ) / (s/√N) = d√N (single sample);
t = (X̄1 − X̄2) / (s_p √(1/N1 + 1/N2)) (independent samples).
Increasing sample size does not increase effect size (strength of association). It shrinks the standard error, so power is greater and |t| is larger.
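To make the "significance = effect size × sample size" relation concrete, here is a minimal Python sketch for the single-sample case, reusing the earlier example (X̄ = 11, μ = 10, s = 5, N = 25; variable names are mine):

```python
import math

xbar, mu0, s, N = 11, 10, 5, 25
d = (xbar - mu0) / s                          # effect size d = .2
t_direct = (xbar - mu0) / (s / math.sqrt(N))  # t computed the usual way
t_from_d = d * math.sqrt(N)                   # same value via t = d * sqrt(N)
print(t_direct, t_from_d)                     # both equal 1.0
```

Quadrupling N doubles √N and hence doubles t, while d itself is unchanged, which is the point of the slide.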
Estimating Power (1) If the null is false, the statistic is no longer distributed as t, but rather as noncentral t. This makes power computation difficult.
Noncentrality Howell introduces the noncentrality parameter delta (δ) to use for estimating power. For the one-sample t, δ = d√n. Recall the relations between t and d on the earlier slide.
Estimating Power (2) Suppose (Howell, p. 231) that we have 25 people, a sample mean of 105, and a hypothesized mean and SD of 100 and 15, respectively. Then
d = (105 − 100) / 15 = 5/15 = .33; δ = d√n = .33√25 = 1.65.
Howell presents an appendix where delta is related to power; for δ = 1.65, power = .38.
For power = .8, alpha = .05, delta must be 2.80. To solve for n, we compute:
n = (δ/d)² = (2.80/.33)² = (8.48)² = 71.9, i.e., about 72 people.
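A minimal Python sketch of this one-sample power calculation (using the slide's rounded d = .33 and Howell's δ = 2.80 for power .8 at α = .05; the tabled δ-to-power lookup itself is not reproduced here):

```python
import math

d = 0.33                          # (105 - 100) / 15, rounded as on the slide
delta = d * math.sqrt(25)         # 1.65 -> power ≈ .38 from Howell's appendix
n = (2.80 / d) ** 2               # n needed for delta = 2.80 (power .8)
print(round(delta, 2), math.ceil(n))   # 1.65 and 72 people
```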
Estimating Power (3) A dependent t can be cast as a single-sample t using difference scores. Independent t: to use Howell's method, δ = d√(n/2); the result is n per group, so double it for the total. Suppose d = .5 (medium effect) and n = 25 per group:
δ = d√(n/2) = .5√(25/2) = .5√12.5 = 1.77.
From Howell's appendix, δ = 1.77 with alpha = .05 results in power of .43. For a power of .8, we need δ = 2.80:
n = 2(δ/d)² = 2(2.80/.5)² = 2(5.6)² = 62.7.
Need 63 per group.
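The independent-samples version, as a minimal Python sketch (again assuming Howell's δ = 2.80 for power .8 at α = .05):

```python
import math

d, n = 0.5, 25
delta = d * math.sqrt(n / 2)           # ≈ 1.77 -> power ≈ .43 from the appendix
n_per_group = 2 * (2.80 / d) ** 2      # n per group needed for power .8
print(round(delta, 2), math.ceil(n_per_group))   # 1.77 and 63 per group
```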
SAS Proc Power single sample example
proc power;
onesamplemeans test=t
nullmean = 100
mean = 105
stddev = 15
power = .8
ntotal = . ;
run;
The POWER Procedure: One-sample t Test for Mean
Fixed Scenario Elements: Distribution Normal; Method Exact; Null Mean 100; Mean 105; Standard Deviation 15; Nominal Power 0.8; Number of Sides 2; Alpha 0.05.
Computed N Total: Actual Power 0.802; N Total 73.
2 sample t Power Calculate sample size:
proc power;
twosamplemeans
meandiff = .5
stddev = 1
power = 0.8
ntotal = . ;
run;
The POWER Procedure: Two-sample t Test for Mean Difference
Fixed Scenario Elements: Distribution Normal; Method Exact; Mean Difference 0.5; Standard Deviation 1; Nominal Power 0.8; Number of Sides 2; Null Difference 0; Alpha 0.05; Group 1 Weight 1; Group 2 Weight 1.
Computed N Total: Actual Power 0.801; N Total 128.
2 sample t Power Calculate power for a fixed sample size:
proc power;
twosamplemeans
meandiff = 5   /* assumed difference */
stddev = 10    /* assumed SD */
sides = 1      /* 1 tail */
ntotal = 50    /* 25 per group */
power = . ;    /* tell me! */
run;
The POWER Procedure: Two-sample t Test for Mean Difference
Fixed Scenario Elements: Distribution Normal; Method Exact; Number of Sides 1; Mean Difference 5; Standard Deviation 10; Total Sample Size 50; Null Difference 0; Alpha 0.05; Group 1 Weight 1; Group 2 Weight 1.
Computed Power: Power 0.539.
Typical Power in Psych The average effect size is about d = .40. Consider power for effect sizes between .3 and .6. What kind of sample size do we need for power of .8?
proc power;
twosamplemeans
meandiff = .3 to .6 by .1
stddev = 1
power = .8
ntotal = . ;
plot x = power min = .5 max = .95;
run;
Two-sample t Test for Mean Difference, Computed N Total:
Index   Mean Diff   Actual Power   N Total
1       0.3         0.801          352
2       0.4         0.804          200
3       0.5         0.801          128
4       0.6         0.804          90
Typical studies are underpowered.
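Howell's delta approximation from the earlier slides can roughly reproduce this SAS table. A minimal Python sketch (assuming δ = 2.80 for power .8 at α = .05, two tails; the helper name is mine — it gives slightly smaller totals than SAS's exact noncentral-t answers of 352, 200, 128, and 90):

```python
import math

def approx_total_n(d, delta_needed=2.80):
    """Approximate total N (both groups) for a two-sample t at power .8."""
    n_per_group = 2 * (delta_needed / d) ** 2
    # round before ceiling to guard against floating-point noise
    return 2 * math.ceil(round(n_per_group, 6))

for d in (0.3, 0.4, 0.5, 0.6):
    print(d, approx_total_n(d))   # 350, 196, 126, 88 -- close to the SAS table
```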
Power Curves Why a whopper of an IV is helpful.
[Figure: power curves plotting total sample size (0 to 600) against power (.5 to 1.0), one curve per mean difference: 0.3, 0.4, 0.5, 0.6.]
Review About how many people total will you need for power of .8, alpha = .05 (two tails), and an effect size of .3? Suppose you can only afford 40 people per group, and based on the literature, you estimate the group means to be 50 and 60 with a standard deviation within groups of 20. What is your power estimate?