The t Distributions

t Table

The t distribution becomes closer to the z distribution as sample size increases. Think of the z statistic as a single-blade Swiss Army knife and the t statistic as a multi-blade Swiss Army knife that includes, as one of its blades, the z statistic.

t distributions

The t distributions help specify how confident we can be about research findings. A t test based on the t distributions tells us how confident we can be that the sample differs from the larger population. The t distributions are more versatile than the z distribution because they can be used when:
1. the population standard deviation is not known
2. comparing two samples
For instance, one graph can show a standard normal z distribution alongside t distributions for samples of 30, 8, and 2 individuals. The distributions for smaller samples are wider and flatter than the z distribution. As sample size increases, the t distributions look more like the z distribution.
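As a rough illustration of this convergence, here is a minimal sketch (assuming Python with scipy is available; the sample sizes are taken from the example graph above) comparing one-tailed critical values of t distributions against the z distribution:

```python
# Sketch: t critical values approach the z critical value as df grows.
# Assumes scipy is installed; values are for a one-tailed test at p = 0.05.
from scipy.stats import norm, t

z_crit = norm.ppf(0.95)  # critical z, about 1.645
print(f"z: {z_crit:.3f}")

for n in (2, 8, 30, 1000):   # sample sizes, including those in the example
    df = n - 1               # degrees of freedom = N - 1
    print(f"t (N={n}, df={df}): {t.ppf(0.95, df):.3f}")
```

With N = 2 the critical t is about 6.314; by N = 1000 it is about 1.646, essentially the z value.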

Six Steps of Hypothesis Testing and t statistic

The t statistic can be used within the six steps of hypothesis testing to make an inference about test results.

Using the standard error to calculate the t statistic

The t statistic indicates the distance of a sample mean from a population mean in terms of the estimated standard error. When conducting a single-sample t test, we calculate the t statistic. The formula is identical to that for the z statistic, except that it uses the estimated standard error. Here is the formula for the t statistic for a distribution of means: t = (M - μM)/sM. The denominator is the only difference between this formula for the t statistic and the formula used to compute the z statistic for a sample mean. The corrected denominator makes the t statistic smaller, thereby reducing the probability of obtaining an extreme t statistic. A t statistic is not as extreme as a z statistic; in scientific terms, it is more conservative.
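As a minimal sketch of this formula (standard-library Python only; the function name and data are illustrative, not from a source), the t statistic could be computed directly from a sample:

```python
# Sketch: computing a single-sample t statistic by hand.
import math

def t_statistic(sample, pop_mean):
    n = len(sample)
    m = sum(sample) / n                                         # sample mean M
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))  # corrected s
    s_m = s / math.sqrt(n)                                      # estimated standard error sM
    return (m - pop_mean) / s_m                                 # t = (M - muM) / sM

print(t_statistic([8, 12, 16, 12, 14], 10))  # about 1.81
```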

t tests

There are three types of t tests:
1. Single-sample t test - comparing a sample mean to a population mean when we do not know the population standard deviation.
2. Paired-samples t test - comparing two samples in which every participant is in both samples - a WITHIN-groups design.
3. Independent-samples t test - comparing two samples in which every participant is in only one sample - a BETWEEN-groups design.
The two groups may be a sample and a population, or two samples as part of a within-groups design or a between-groups design.
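For orientation, here is a sketch of how the three tests map onto scipy's functions (assuming scipy is installed; the data are made up for illustration):

```python
# Sketch: the three t tests via scipy.stats, with invented data.
from scipy import stats

sample_a = [8, 12, 16, 12, 14]
sample_b = [10, 13, 15, 11, 16]

# 1. Single-sample t test: sample mean vs. a known population mean.
print(stats.ttest_1samp(sample_a, popmean=10))

# 2. Paired-samples t test: every participant is in both samples (within-groups).
print(stats.ttest_rel(sample_a, sample_b))

# 3. Independent-samples t test: each participant is in only one sample (between-groups).
print(stats.ttest_ind(sample_a, sample_b))
```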

Calculating the estimated standard deviation for the population

See Pages 216 - 217.
Step 1. Calculate the sample mean: M = (8 + 12 + 16 + 12 + 14)/5 = 12.4
Step 2. Use the sample mean in the corrected formula for the standard deviation: s = √(Σ(X - M)^2 / (N - 1))
The easiest way to calculate the numerator under the square root sign is to first organize the data into columns:
X     X - M    (X - M)^2
8     -4.4     19.36
12    -0.4     0.16
16    3.6      12.96
12    -0.4     0.16
14    1.6      2.56
The numerator is: Σ(X - M)^2 = 19.36 + 0.16 + 12.96 + 0.16 + 2.56 = 35.2
Given a sample size of 5, the corrected standard deviation is: s = √(35.2/(5 - 1)) = √8.8 = 2.97
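The same two steps, sketched in Python (standard library only), reproduce the numbers above:

```python
# Sketch: corrected sample standard deviation for the worked example.
import math

scores = [8, 12, 16, 12, 14]
n = len(scores)

m = sum(scores) / n                      # Step 1: sample mean = 12.4
ss = sum((x - m) ** 2 for x in scores)   # sum of squared deviations = 35.2
s = math.sqrt(ss / (n - 1))              # Step 2: s = sqrt(8.8) ≈ 2.97

print(m, ss, round(s, 2))
```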

Standard Deviation vs. Standard Error

The standard deviation is a measure of the dispersion, or scatter, of the data. The standard error provides an estimate of the precision of a parameter (such as a mean, proportion, odds ratio, or survival probability) and is used when one wants to make inferences about data from a sample (eg, the sort of sample in a given study) to some relevant population. The standard deviation of the distribution of means is σM = σ/√N. Given a statistical property known as the central limit theorem [5], we know that, regardless of the distribution of the parameter in the population, the distribution of these means, referred to as the sampling distribution, approaches a normal distribution with mean μ and standard deviation σM. Since we know that in a normal distribution approximately 95% of the observations (in the present case, the observations are the means of each sample drawn from the population) fall within 1.96 standard deviations on each side of the mean (in the present case, this refers to the mean of the means), we can safely assume that ±1.96 × σM will contain 95% of the means drawn from the population. The problem is that when conducting a study we have one sample (with multiple observations), eg, s1 with mean m1 and standard deviation sd1, but we do not have μ or σM. However, it happens that m1 is an unbiased estimate of μ, and what is called the standard error is our best estimate of σM (the standard error is in essence the standard deviation of the sampling distribution of a random variable).
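A quick simulation makes the central limit theorem claim concrete. This is a sketch assuming numpy is available; the population (an exponential distribution, deliberately non-normal) and its parameters are invented for illustration:

```python
# Sketch: the SD of many sample means approximates sigma / sqrt(N),
# even when the population itself is skewed.
import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 15.0, 25, 10_000

# Draw many samples of size n from a skewed population with SD = sigma.
samples = rng.exponential(scale=sigma, size=(reps, n))
means = samples.mean(axis=1)

print("SD of sample means:", means.std(ddof=1))   # empirical sigma_M
print("sigma / sqrt(N):   ", sigma / np.sqrt(n))  # CLT prediction, 3.0
```

The two printed values should agree closely, which is why s/√N from a single sample serves as our best estimate of σM.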

Single-Sample t test

The single-sample t test is a hypothesis test in which we compare a sample from which we collect data to a population for which we know the mean but not the standard deviation. The logic of the single-sample t test is a model for the other t tests that allow us to compare two samples, and for all of the more sophisticated statistical tests that follow.

Degrees of Freedom and the t table

The degrees of freedom is the number of scores that are free to vary when we estimate a population parameter from a sample. When using the t distributions we use the t table. There is a different t distribution for every sample size, and the t table takes sample size into account. We do NOT look up the actual sample size on the table; we look up DEGREES OF FREEDOM. NOTE: The term free to vary refers to the number of scores that can take on different values when a given parameter is known. For example, if we know that the mean of three scores is 10, only two scores are free to vary. Once we know the values of two scores, we know the value of the third. If we know that two of the scores are 9 and 10, then we know that the third must be 11.

Single-sample t test formula

The formula for degrees of freedom for a single-sample t test is df = N - 1. To calculate degrees of freedom we subtract 1 from the sample size. We look up the df in the t table to find the critical value needed to declare statistical significance. As degrees of freedom go up, the critical values go down. For a one-tailed test at a p level of 0.05 with only 1 degree of freedom (two observations), the critical t value is 6.314. With only 1 degree of freedom, the two means have to be extremely far apart and/or the standard deviation has to be very small to declare a statistically significant difference. With 2 degrees of freedom (three observations), the critical t value drops to 2.920, making it easier to reach the critical value.
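A sketch (assuming scipy) that reproduces these t-table critical values and shows them falling as df rises:

```python
# Sketch: one-tailed critical t values at p = 0.05 drop as df increases.
from scipy.stats import t

for df in (1, 2, 5, 10, 30):
    print(f"df={df}: critical t = {t.ppf(1 - 0.05, df):.3f}")
# df=1 gives 6.314 and df=2 gives 2.920, matching the values quoted above.
```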

Estimating Population Standard Deviation from a Sample

This is done by using the sample standard deviation, and estimating the standard deviation is the only practical difference between conducting a z test with the z distribution and conducting a t test with the t distribution. Here is the sample standard deviation formula used up until now: SD = √(Σ(X - M)^2 / N). That formula needs a correction to account for some level of error when estimating the population standard deviation from a sample. One tiny alteration of the formula leads to a slightly larger and more accurate standard deviation. Instead of dividing by N, we divide by (N - 1) in the denominator. Subtracting 1 from the sample size in the denominator corrects for the probability that the sample standard deviation slightly underestimates the actual standard deviation in the population: s = √(Σ(X - M)^2 / (N - 1)). The standard deviation is now written s instead of SD, still using a Latin rather than a Greek letter because it is a statistic from a sample rather than a parameter from a population.
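In numpy terms, this correction is the ddof ("delta degrees of freedom") argument. A sketch, assuming numpy, using the same scores as the worked example above:

```python
# Sketch: uncorrected (divide by N) vs. corrected (divide by N - 1) estimates.
import numpy as np

scores = np.array([8, 12, 16, 12, 14])
print("SD (divide by N):    ", np.std(scores))          # ≈ 2.65
print("s  (divide by N - 1):", np.std(scores, ddof=1))  # ≈ 2.97
```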

Calculating the standard error for the t statistic

With an estimate of the standard deviation of the distribution of scores, we still need an estimate of the spread of the distribution of means, the standard error. We make the spread smaller to reflect the fact that a distribution of means is less variable than a distribution of scores. We do this in exactly the same way that the z distribution is adjusted: divide s by √N to get the standard error as estimated from a sample. Notice that σ is replaced with s because we use the corrected sample standard deviation: sM = s/√N

Converting the corrected standard deviation to standard error

Continuing the example above, with s = 2.97 and N = 5: sM = 2.97/√5 = 1.33. Just as the central limit theorem predicts, the standard error for the distribution of sample means is smaller than the standard deviation of the sample scores.
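Finishing the worked example in code (a sketch, standard library only):

```python
# Sketch: converting the corrected standard deviation to the standard error.
import math

s, n = 2.97, 5
s_m = s / math.sqrt(n)
print(round(s_m, 2))  # ≈ 1.33, smaller than s = 2.97, as the CLT predicts
```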

