Chapters 6-10


regression example

- The line shows the overall pattern (the trend), but individuals deviate from this pattern (the scatter).
- Individual deviation = residual = observed value - predicted value = y - ŷ

if we want better precision (narrower CI)

- then we will need a bigger sample, i.e., more data

For a given confidence level, increasing the sample size, n:

- decreases the value of the standard error of the estimate
- in the case of MEANS, also decreases the t-multiplier (df = n - 1 increases); in the case of PROPORTIONS, df = ∞ and t = 1.96, so the t-multiplier stays the same
- decreases the width of the confidence interval
- makes our estimate statement more precise
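A minimal sketch of these effects in Python (the sample standard deviation s is invented; the t-multiplier comes from scipy.stats). As n grows, both se and the t-multiplier shrink, so the interval narrows:

```python
# CI half-width for a mean = t x se, with se = s/sqrt(n) and df = n - 1.
from scipy import stats

s = 10.0                                     # assumed sample standard deviation
for n in (10, 40, 160):
    se = s / n ** 0.5                        # se shrinks like 1/sqrt(n)
    t_mult = stats.t.ppf(0.975, df=n - 1)    # 95% t-multiplier shrinks toward 1.96
    print(f"n={n:4d}  t={t_mult:.3f}  se={se:.3f}  width={2 * t_mult * se:.3f}")
```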

Paired Data

- A paired data t-test is mechanically equivalent to a one sample t-test applied to the differences.
- Paired data comparisons are beneficial when the variability within pairs is small compared to the variability between pairs.

Statistical Significance versus Practical Significance

- A statistically significant result means that a study has produced a small P-value (< 5%). A small P-value provides evidence about the existence of an effect or difference but says nothing about the size of that effect or difference.
- The size of an effect is estimated with a confidence interval. Look at a confidence interval when determining the practical significance of an effect.
- The size of a statistically significant effect can be so small as to have no practical importance at all, i.e., statistical significance does not imply practical significance.

Significance of p value

- Describes the test result as (statistically) significant or nonsignificant: a test result is significant when the P-value is small enough (< 0.05).
- The significance level (5%) is the proportion of times that the null hypothesis will be rejected when it is actually true (i.e., the proportion of times action is taken when really no action should be taken).

Validity of the Chi-Square Test

- At least 80% of the expected counts must be 5 or more, and
- Each expected count must be greater than 1.

The Difference between Confidence Intervals and Hypothesis Testing

- Chapter 6: We use confidence intervals to estimate the size of a population parameter (μ, p, μ1 - μ2 or p1 - p2), i.e., the confidence interval is an interval estimate for the size of a parameter.
- Chapter 7: We 'check out' the plausibility of a specified value for a parameter (μ1 - μ2 or p1 - p2), i.e., we test the hypothesis that the parameter has the specified value.
- Most commonly tested null hypotheses are of the "it makes no difference" variety.

Underlying distribution

- For a random sample from some population, 'population distribution' and 'underlying distribution' are the same ideas. Similarly, in this case, 'population mean' and 'underlying mean' are the same ideas.

In summary: H0 and H1

- H0 hypothesises one single value for the parameter, whereas H1 hypothesises a range of values for the parameter.
- If the idea of doing a test has come as a result of looking at some data, then for H1 use '≠', a 2-sided alternative hypothesis.

hypotheses for chi-square test

- H0: The distribution of variable 1 is the same for each level of variable 2.
- H1: The distribution of variable 1 is not the same at all levels of variable 2.
- The sampling situation determines which one of the two variables is variable 1 and which one is variable 2.
- If independent samples are taken from different populations and sample members are categorised by a variable, then the null hypothesis can be a statement of homogeneity among the populations (i.e., variable 1 categorises the sample members and variable 2 defines the populations): H0: The distribution of variable 1 is the same for each population.
- This test for independence is often called a test for homogeneity.
- If a single random sample has been cross-classified by variable 1 and variable 2, then the null hypothesis can be a statement of homogeneity (sameness) and it doesn't matter which variable is variable 1 and which variable is variable 2.

The relationship between a 2-tailed t-test at the 5% level of significance and a 95% confidence interval for a parameter.

- If a 2-tailed test is significant at the 5% level (P-value less than 5%), then the hypothesised value is not plausible (not believable) and it will not be in the 95% confidence interval.
- Conversely, if the hypothesised value is not in the 95% confidence interval, it is not a plausible value and so H0 will be rejected at the 5% level. (I.e., the P-value will be less than 5%.)
- If a 2-tailed test is not significant at the 5% level (P-value greater than 5%), then the hypothesised value is plausible (is believable) and so it will be in the 95% confidence interval.
- Conversely, if the hypothesised value is in the 95% confidence interval, it is a plausible value and so H0 will not be rejected at the 5% level. (I.e., the P-value will be greater than 5%.)

The Form of the Hypotheses in Chapter 7

- In this chapter we make hypotheses about the parameters μ1 - μ2 and p1 - p2. We never make hypotheses about estimates such as x̄1 - x̄2 and p̂1 - p̂2.
- We could also include the parameters for a single mean, μ, and a single proportion, p.

Alternative words/expressions

- Independent: not related, not associated, not linked
- Not independent: associated, related, linked

formula for the least squares regression line

- Minimise the sum of squared residuals / prediction errors, i.e., minimise ∑(residuals)²
- There is one and only one least squares regression line for every linear regression
- ∑ residuals = 0 for the least squares line, but also for many other lines
- (x̄, ȳ) is on the least squares line
- A calculator or computer gives the equation of the least squares line
- The equation of the least squares line is ŷ = β̂0 + β̂1 x, where β̂0 is the y-intercept and β̂1 is the slope
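A minimal sketch with NumPy (the x, y data are invented for illustration; np.polyfit is one of several ways to get the least squares coefficients). It also verifies two of the properties listed above:

```python
# Fit the least squares line and check: residuals sum to ~0,
# and (x-bar, y-bar) lies on the line.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

b1, b0 = np.polyfit(x, y, deg=1)    # slope, intercept (highest power first)
y_hat = b0 + b1 * x
residuals = y - y_hat

print(f"y-hat = {b0:.3f} + {b1:.3f}x")
print("sum of residuals:", round(residuals.sum(), 10))
print("(x-bar, y-bar) on the line:", np.isclose(b0 + b1 * x.mean(), y.mean()))
```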

If the t-test conditions are satisfied

- Necessary conditions 1. and 2. above are satisfied if:
  1. the data have been obtained through random sampling from distinct populations, or
  2. the data have arisen through random allocation of units to groups (i.e., an experiment).
- If the data have come from an observational study, then there needs to be very careful consideration of exactly how, and under what conditions, the data were obtained when checking necessary conditions 1. and 2.
- Two-tailed t-tests are even more robust (less sensitive) with respect to the Normality assumption than one-tailed t-tests. That is, we can be even more relaxed about skewness in the data for two-tailed t-tests.
- Two sample t-procedures are even more robust against non-Normality than one sample t-procedures, especially for samples of similar size and distributional shape.

The Hypotheses

- Researchers conduct a study to check out a "hunch". - The hunch gives rise to a research hypothesis which the researchers try to establish as being true. - Hypothesis tests involve 2 competing statements called the null hypothesis and the alternative hypothesis.

Regression variables

- The X-variable is used to predict or explain the behaviour of the Y-variable - The X-variable is called the explanatory (or predictor or independent) variable - The Y-variable is called the response or (or outcome or dependent) variable

The Alternative Hypothesis, H1

- The alternative hypothesis corresponds to the research hypothesis. - It usually takes the form that something is happening, there is a difference or an effect, there is a relationship. - the researcher hopes to give support to H1 by showing that H0 is not believable.

Data- chi square

- The data are collected from a single random sample or a number of independent random samples.
- The data are summarised by cross-classifying and presenting in a 2-way table of counts.

Evidence against the Null Hypothesis (f test)

- The data give evidence against the null hypothesis when the differences between the sample/group means, x̄i, are large when compared with some sort of 'average overall spread' within the samples/groups.
- In other words, the data give evidence against the null hypothesis when the variability between the sample or group means, x̄i, is large relative to the variability within the samples or groups.

H0 and H1

- The researcher has a hunch which takes the role of the alternative hypothesis; he/she 'puts up' a null hypothesis against the alternative hypothesis.
- He/she tests the 'plausibility' of the null hypothesis by asking "does the evidence (data) suggest that the null is simply not plausible/believable?". If so, then his/her alternative hypothesis (his/her hunch) gains support.

F-Test for One-Way Analysis of Variance

- We test H0: All of the k (underlying/population) means, μ1, μ2, ..., μk, are the same, i.e., μ1 = μ2 = ... = μk, i.e., the grouping factor and the response variable are independent.
- Versus H1: Not all of the k (underlying/population) means, μ1, μ2, ..., μk, are the same, i.e., a difference exists between some of the means, i.e., at least two of the means are different, i.e., at least one of the means is different from the others, i.e., the grouping factor and the response variable are related.

The Null Hypothesis, H0

- We test the null hypothesis. We determine how much evidence we have against H0.
- The null hypothesis is typically a sceptical reaction to a research hypothesis.
- The researcher hopes to disprove or reject H0.
- We can never show or prove that H0 is true!

Inferences about the Slope and Intercept

- Confidence interval for the slope of the true line: estimate ± t standard errors = β̂1 ± t × se(β̂1)
- When testing for no linear relationship between X and Y, we test H0: β1 = 0
- The t-test statistic for testing for no linear relationship between X and Y is: t0 = (estimate - hypothesised value) / standard error = (β̂1 - 0) / se(β̂1)
- For the simple linear model (i.e., when carrying out inferences on β0 or β1), use df = n - 2.
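As a sketch, scipy.stats.linregress reports the slope estimate, its standard error and the two-sided P-value for H0: β1 = 0 (the x, y data below are invented):

```python
# Slope inference for the simple linear model; df = n - 2.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 2.9, 4.2, 4.8, 6.1, 6.9])

res = stats.linregress(x, y)
df = len(x) - 2
t_mult = stats.t.ppf(0.975, df)          # 95% t-multiplier
lo = res.slope - t_mult * res.stderr     # estimate - t x se
hi = res.slope + t_mult * res.stderr     # estimate + t x se
print(f"slope = {res.slope:.3f}, se = {res.stderr:.3f}, P = {res.pvalue:.4f}")
print(f"95% CI for the slope: ({lo:.3f}, {hi:.3f})")
```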

decreased t multiplier

- results from an increased sample size (larger df)
- gives a narrower confidence interval

The Sample Correlation Coefficient, r

- r has a value between -1 and +1
- r measures the strength and direction of the linear association between two numeric variables
- r is a measure of how close the points come to lying on a straight line
- The value of r is the same if the axes are swapped around: it doesn't matter which variable is X and which one is Y
- r has no units: it doesn't matter what units are used for X and Y, e.g., if X is height and Y is weight then X could be in centimetres or inches; r would have the same value

Regression relationship= linear trend + constant scatter

1. E(Y) represents the expected value (or mean) of Y for individuals in the population who all have the same particular value of x.
2. E(Y) = β0 + β1 x
3. β0 is the intercept of the straight line in the population.
4. β1 is the slope of the straight line in the population.
5. β0 and β1, the parameters of the straight line in the population, are unknown.
6. Because the standard deviation of the errors is the same regardless of the value of x, the simple linear model assumes "constant scatter".

Calculating an Estimate and the Margin of Error from a given confidence interval

1. Estimate: The estimate is the half-way point between the confidence limits (end points) of the confidence interval. To find the estimate, add the two confidence limits and divide by 2.
2. Margin of Error: The confidence interval is calculated using estimate ± margin of error. To find the margin of error, subtract the estimate from the upper confidence limit.

E.g., for a given confidence interval (15, 23): Estimate = (15 + 23)/2 = 19, Margin of Error = 23 - 19 = 4.

Assumption for One Sample t-procedures (for a mean)

1. Observations (data) are independent of each other.
2. The Normality Assumption: The population or underlying distribution is Normal, i.e., the data have come from, or appear to have come from, a distribution which is bell-shaped (unimodal and symmetric). Multi-modes, clusters, and skewness are examples of non-Normal features, and such features in the data suggest non-Normal features in the population or underlying distribution.

Necessary conditions when using t-procedures (for a mean from one sample)

1. Observations are independent (check how the sample units were obtained).
2. Data don't show separation into clusters or have a multi-modal nature.

Assumptions for two independent sample t-procedures (for means)

1. Observations from within the same sample/group are independent of each other.
2. The two samples/groups are independent, i.e., observations between samples/groups are independent of each other.
3. The Normality Assumption: The population or underlying distributions are Normal, i.e., the data in each sample/group appear to have come from a distribution which is bell-shaped (unimodal and symmetric). Multi-modes, clusters, and skewness are examples of non-Normal features, and such features in the data suggest non-Normal features in the population or underlying distributions.

Assumptions for One-way ANOVA F-test

1. Observations from within the same sample/group are independent, e.g., the samples/groups are random.
2. The samples/groups are independent, i.e., observations from different samples/groups are independent.
3. The underlying distributions (or populations) are Normal.
4. The standard deviations of the underlying distributions (or populations) are equal.

Data should not suggest clusters or multimodes and should not be too strongly skewed (take into account the sample/group size).

Necessary Conditions when using t-Procedures (for comparing 2 means from 2 independent samples/groups)

1. Observations within the samples are independent - CRITICAL!
2. The two samples/groups are independent - CRITICAL!
3. Data do not suggest an underlying separation into clusters or a multi-modal nature.
4. Further properties of the sample data: the bigger the sample/group sizes, the more relaxed we can be about non-Normal features such as skewness and outliers.

Step-by-Step Guide to Performing a Hypothesis Test by Hand

1. State the parameter of interest (symbol and words), e.g., is it μ1 - μ2 or p1 - p2?
2. State the null hypothesis, H0.
3. State the alternative hypothesis, H1, e.g., is it H1: μ1 - μ2 ≠ 0, H1: μ1 - μ2 > 0 or H1: μ1 - μ2 < 0?
4. State the estimate and its value.
5. Calculate the test statistic, e.g., for a t-test: t0 = (estimate - hypothesised value) / standard error (the standard error will be provided).
6. Estimate the P-value. (Will be provided.)
7. Interpret the P-value.
8. Calculate the confidence interval.
9. Interpret the confidence interval using plain English.
10. Give an overall conclusion.
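A minimal sketch of steps 5, 6 and 8 in Python; the estimate, standard error and df are invented stand-ins for the values the course provides:

```python
# By-hand hypothesis test for mu1 - mu2 with H0: mu1 - mu2 = 0.
from scipy import stats

estimate = 4.2          # step 4: x-bar1 - x-bar2 (illustrative)
hypothesised = 0.0      # step 2: H0 value
se = 1.6                # standard error (provided in the course)
df = 23                 # df = Min(n1 - 1, n2 - 1) (provided)

t0 = (estimate - hypothesised) / se        # step 5: t-test statistic
p_value = 2 * stats.t.sf(abs(t0), df)      # step 6: 2-sided P-value
t_mult = stats.t.ppf(0.975, df)
ci = (estimate - t_mult * se, estimate + t_mult * se)   # step 8: 95% CI
print(f"t0 = {t0:.3f}, P-value = {p_value:.4f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```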

Step by Step guide to producing a Confidence Interval

1. State the parameter to be estimated (symbol and words). Is it μ, p, μ1 - μ2, or p1 - p2?
2. State the estimate and its value.
3. Write down the formula for a CI, estimate ± t × se(estimate), from the Formula Sheet.
4. Use the appropriate standard error (provided).
5. Use the appropriate t-multiplier (provided).
6. Calculate the confidence limits (end points of the confidence interval).
7. Interpret the interval using plain English.
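For a single mean the recipe reduces to a few lines; a sketch with invented values for x̄, s and n:

```python
# 95% CI for a single mean mu: estimate +/- t x se(estimate).
from scipy import stats

x_bar, s, n = 52.3, 8.1, 25       # illustrative sample summaries
se = s / n ** 0.5                 # se(x-bar) = s / sqrt(n)
t_mult = stats.t.ppf(0.975, df=n - 1)
lower, upper = x_bar - t_mult * se, x_bar + t_mult * se
print(f"95% CI for mu: ({lower:.2f}, {upper:.2f})")
```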

Two forms of the two independent sample t-test

1. The pooled form of the two sample t-test carries the assumption of equal (underlying/population) standard deviations, i.e., assumes σ1 = σ2. (We do not use this form in this course.)
2. The non-pooled form (called the Welch two sample t-test), which does not require the assumption that the (underlying/population) standard deviations are equal, i.e., no need to assume σ1 = σ2.

The formulae for se(estimate), and hence their values, are different for the two forms. We use the non-pooled form (the Welch two sample t-test) in this course.
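In SciPy the non-pooled (Welch) form is selected with equal_var=False; a sketch on invented data:

```python
# Welch (non-pooled) two sample t-test: equal_var=False.
from scipy import stats

group1 = [24.1, 25.3, 22.8, 26.0, 24.7, 23.5]
group2 = [21.9, 22.4, 20.8, 23.1, 21.5, 22.7]

t0, p_value = stats.ttest_ind(group1, group2, equal_var=False)
print(f"t0 = {t0:.3f}, two-sided P-value = {p_value:.4f}")
```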

confidence interval precision goals

1. To have a high level of confidence in the method that we are using, i.e., it works 95% of the time.
2. To have a precise estimation statement, i.e., we want our confidence interval to be narrow.

Chi Square Test statistic formula

X² = ∑ (O - E)² / E, where:
- O is the observed count for a particular cell (comes from the data)
- E is the expected count for a particular cell: what we expect the count to be if the null hypothesis were true. E = (Ri × Cj)/n, where Ri and Cj are the row and column totals for cell (i, j) and n is the total count for the table.
- (O - E)² / E for a particular cell is referred to as the cell contribution to the Chi-square test statistic.
- Hence the Chi-square test statistic is the sum of all the cell contributions.
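A sketch with scipy.stats.chi2_contingency on an invented 2-way table; the function computes the expected counts E = (row total × column total)/n for you:

```python
# Chi-square test statistic, df and P-value for a 2-way table of counts.
import numpy as np
from scipy import stats

observed = np.array([[30, 10],
                     [20, 40]])
chi2, p_value, df, expected = stats.chi2_contingency(observed, correction=False)
print(f"X^2 = {chi2:.3f}, df = {df}, P-value = {p_value:.4f}")
print("expected counts:\n", expected)
```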

observed y value

= predicted value + residual

regression relationship

= trend + scatter

formula for CI

A confidence interval for the true value of a parameter can be constructed using: estimate ± t × se(estimate). The piece being added and subtracted, t × se(estimate), is often called the margin of error.

large and small p value f test

A large P-value indicates that:
- the null hypothesis is plausible
- the differences we see between the sample/group means could be explained simply in terms of random fluctuations (i.e., just due to chance alone)

A small P-value:
- suggests the null hypothesis is not true, i.e., differences exist between some of the (underlying/population) means
- gives no indication of which (underlying/population) means are different
- gives no indication of the size of any differences

Checking the Conditions

Assumptions 1. and 2. are satisfied if:
1. the data have been obtained through random sampling from distinct populations, or
2. the data have arisen through random allocation of units to groups (i.e., an experiment).
- The F-test is robust against departures from the Normality assumption, as in the two sample t-test (particularly when the sample sizes are very similar and ntot is large).
- Use (largest std deviation)/(smallest std deviation) < 2 as a guide for checking the equal standard deviations assumption.

The Least Squares Regression Line

Choose the line with the smallest sum of squared residuals / prediction errors.

Confidence Interval (LMCI, UMCI) and Prediction Interval (LICI, UICI)

Confidence Interval for the Mean:
- This estimates the MEAN Y-value at a specified value of x.
- The width of the interval allows for uncertainty about the values of β0 and β1.

Prediction Interval:
- This predicts the Y-value for an INDIVIDUAL with a specified value of x.
- The width of the interval allows for uncertainty about the values of β0 and β1, plus uncertainty due to the random scatter about the line.

For a given value of x, the 95% prediction interval is always wider than the 95% confidence interval for the mean.
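A sketch using statsmodels (an assumption: the course itself may use other software); get_prediction returns both intervals at each x, and the obs_ci columns (the prediction interval) are visibly wider than the mean_ci columns:

```python
# CI for the mean vs prediction interval in simple linear regression.
import numpy as np
import statsmodels.api as sm

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 2.9, 4.2, 4.8, 6.1, 6.9])

X = sm.add_constant(x)                 # adds the intercept column
fit = sm.OLS(y, X).fit()
frame = fit.get_prediction(X).summary_frame(alpha=0.05)
# mean_ci_* = 95% CI for the mean; obs_ci_* = 95% prediction interval
print(frame[["mean", "mean_ci_lower", "mean_ci_upper",
             "obs_ci_lower", "obs_ci_upper"]])
```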

Hypotheses of Chi-square test

H0: The two variables are independent (are not related or are not associated). H1: The two variables are not independent (are related or are associated).

Confidence level effect on CI width:

If we want to be MORE CONFIDENT that the confidence interval includes the unknown true parameter value then we will need a WIDER interval.

The t-test

In a t-test we use the t-statistic (t0) as the test statistic and we call it the t-test statistic. When the hypothesised value is correct (i.e., H0 is true), we assume that the sampling distribution of the t-test statistic is approximately a Student(df ) distribution, i.e., a Student's t-distribution.

f test one way

In the reading methods example we were investigating whether one categorical variable (the factor) had an effect on a numeric variable, Increase. This type of analysis is called one-way ANOVA because we are using only one explanatory variable.

Simple Linear Model Assumptions (LINE)

- Linearity: The relationship between x and the mean value of Y at X = x is linear.
- Normality: The random errors are Normally distributed with mean zero.
- Constant (Equal) spread: The random errors all have the same standard deviation, σ, regardless of the value of x.
- Independence: The random errors are all independent.

Situation (c):

One sample of size n, many "Yes/No" items

Situation (b):

One sample of size n, several response categories

Statistical significance relates to

P value

The P-value for the F-test (one-sided only!)

P-value = pr(F ≥ f0), where F ~ F(df1, df2)
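A one-line sketch of this tail probability via the survival function of the F-distribution in SciPy (the numbers are invented):

```python
# One-sided F-test P-value: pr(F >= f0) for F ~ F(df1, df2).
from scipy import stats

f0, df1, df2 = 3.4, 3, 96
p_value = stats.f.sf(f0, df1, df2)   # upper tail only
print(f"P-value = {p_value:.4f}")
```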

Situation (a):

Proportions from two independent samples (or two independent subgroups)

Standard Error of an Estimate

Sampling variability: If samples of a given size, n, were repeatedly taken from a population, then the sample means would vary from sample to sample. The standard deviation of these sample means is a measure of the variability of the sample means.

Standard error: The standard error of the sample mean, se(x̄), is an estimate of this standard deviation. It is calculated using se(x̄) = s/√n, where s is the sample standard deviation and n is the sample size.
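A small simulation sketch of this idea (the Normal population below is invented purely to illustrate):

```python
# The sd of repeated sample means is close to what s/sqrt(n) estimates.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(50, 10, size=100_000)   # assumed population
n = 25
means = [rng.choice(population, n).mean() for _ in range(2_000)]
print("sd of the 2000 sample means:", round(float(np.std(means)), 3))
print("population sd / sqrt(n):   ", round(population.std() / n ** 0.5, 3))
```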

P-value for chi test

The P-value is the conditional probability of observing a test statistic as extreme as that observed or more so, given that the null hypothesis is true.

What does the P-value measure?

The P-value measures the strength of evidence against the null hypothesis, H0.

Interpreting the P-value

The P-value measures the strength of evidence against the null hypothesis, H0. The smaller the P-value, the stronger the evidence against H0.

The t-statistic

The t-statistic (t0) measures the distance between the estimate and the hypothesised value in terms of standard errors, i.e., it tells us how many standard errors the estimate is away from the hypothesised value.

Standard error of the sample mean se(x̄)

The standard error of the sample mean, se(x̄), measures, roughly, the average distance between a sample mean, x̄, and the population mean, μ (over all possible random samples of a given size that can be taken from the population).

t multiplier with df

The t-multiplier, t, decreases as the degrees of freedom, df, increases.

what is t statistic

The t-statistic (t0) measures the distance between the estimate and the hypothesised value in terms of standard errors, i.e., it tells us how many standard errors the estimate is away from the hypothesised value. The t-statistic can also be thought of as comparing the distance between the data-estimate and the hypothesised value to the standard error.

What does the test statistic measure?

The test statistic measures the discrepancy between what we see in the data and what we would expect to see if the null hypothesis, H0, was true.

Difference between Two Proportions p1-p2

There are three different sampling situations from which we may want to compare proportions. (a, b, c)

Chi Square Test statistic meaning

This is a measure of the difference between the observed counts and the expected counts under the null hypothesis.
- There will be evidence against the null hypothesis if there are relatively large differences between the observed and expected counts in one or more cells.
- As with the t-test and the F-test, the greater the magnitude or size of the test statistic, the stronger the evidence against the null hypothesis.
- Under the null hypothesis, we assume the sampling distribution of the test statistic is a Chi-square(df) distribution.

Checking the Assumptions for Simple Linear Model

Three of the four assumptions in the simple linear model are about the errors. The residuals (yi - ŷi) from the fitted model are estimates of these unobserved errors, so the residuals are used to help check the model assumptions. Look at the following scatter plots for linearity, constant spread and outliers:
- Y versus X
- Residuals versus X, or residuals versus ŷ (i.e., a residual plot)
- If the fitted line summarises the average pattern in the data, then the residual plot should be a patternless horizontal band centred at zero.
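A sketch of the residual-plot check on invented data that do satisfy the model; the plotted band should look patternless and centred at zero:

```python
# Fit a line, then plot residuals versus x to check the model assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 60)
y = 1.5 + 0.8 * x + rng.normal(0, 1, x.size)   # linear trend + constant scatter

b1, b0 = np.polyfit(x, y, 1)
residuals = y - (b0 + b1 * x)

plt.scatter(x, residuals)
plt.axhline(0, color="grey")    # the band should centre on this line
plt.xlabel("x")
plt.ylabel("residual")
plt.show()
```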

precision and width of confidence interval relative to sample size

To double the precision, i.e., to reduce the width of the confidence interval to a half, we need 4 times as many observations. To triple the precision, i.e., to reduce the width of the confidence interval to a third, we need 9 times as many observations. (The width is roughly proportional to 1/√n, since se = s/√n.)

Degrees of Freedom for two sample t

Using df = Min(n1 - 1, n2 - 1): this smaller, conservative value for df makes the confidence interval produced by hand wider than is strictly necessary, which ensures the method's success rate is at least 95%.

The F-distribution

When the null hypothesis is true and the assumptions for F-procedures hold, the sampling distribution of the random variable corresponding to f0 is the F-distribution with df1 = k - 1 and df2 = ntot - k.

Note:
1. k is the number of groups
2. ntot is the total number of observations, i.e., ntot = n1 + n2 + ... + nk
3. df1 and df2 are the parameters of the F-distribution and we write F(df1, df2)

[Diagram in the original: a plot of the F(df1 = 3, df2 = 96) distribution.]

Will t-procedures still 'work' even if the Normality Assumption is not true? (two and one sample)

Yes: under certain conditions, t-tests and confidence intervals on two independent samples still work quite well even if the underlying distributions are skewed.

The alternative hypothesis, H1, has one of the following forms:

a. H1: parameter ≠ hypothesised value (2-sided hypothesis)
b. H1: parameter > hypothesised value (1-sided hypothesis)
c. H1: parameter < hypothesised value (1-sided hypothesis)

where the parameter is μ, p, μ1 - μ2 or p1 - p2

Why need CI?

Estimates are almost always wrong: an estimate is just an estimate.
- A type of interval which contains the true value of a parameter for 95% of samples repeatedly taken in the long run is called a 95% confidence interval.
- Our confidence in a particular interval comes from the fact that the method for constructing the confidence interval works most of the time.

F statistic

f0 = s²B / s²W, where s²B is a measure of the variability between the sample/group means and s²W is a measure of the variability within the samples/groups. The total sample size also influences the value of f0.
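A sketch of the F-test via scipy.stats.f_oneway on three invented groups; f0 is large when the between-group variability dominates the within-group variability:

```python
# One-way ANOVA F-test: k = 3 groups, so df1 = 2 and df2 = ntot - 3 = 9.
from scipy import stats

g1 = [10.1, 11.2, 9.8, 10.5]
g2 = [12.3, 13.0, 12.8, 11.9]
g3 = [10.9, 11.5, 11.1, 10.7]

f0, p_value = stats.f_oneway(g1, g2, g3)
print(f"f0 = {f0:.3f}, P-value = {p_value:.4f}")
```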

Adjusting the Confidence Level, For a given sample size, increasing the confidence level:

- increases the value of the t-multiplier, t
- increases the width of the confidence interval
- makes our estimate statement less precise

estimate the P-value via:

- Simulation approach (the randomisation test: a re-randomisation distribution is used to estimate the P-value)
- Theoretical approach (the t-test: a Student's t-distribution is used to estimate the P-value).
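A sketch of the simulation approach for a difference in means (invented data): shuffle the group labels many times and count how often the shuffled difference is at least as extreme as the observed one:

```python
# Randomisation test: estimate the 2-sided P-value by re-randomising labels.
import numpy as np

rng = np.random.default_rng(3)
a = np.array([24.1, 25.3, 22.8, 26.0, 24.7])
b = np.array([21.9, 22.4, 20.8, 23.1, 21.5])
observed = a.mean() - b.mean()

pooled = np.concatenate([a, b])
n_rerand, count = 10_000, 0
for _ in range(n_rerand):
    rng.shuffle(pooled)                                    # re-randomise labels
    diff = pooled[:a.size].mean() - pooled[a.size:].mean()
    if abs(diff) >= abs(observed):
        count += 1
print(f"estimated 2-sided P-value: {count / n_rerand:.4f}")
```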

Dealing with outliers (also applies to 2-independent sample t-test) :

- Try to determine if the outlier is real or a mistake. If it's a mistake then correct it or remove it from the data set.
- Otherwise, one possible solution is to do the analysis with the outlier, then remove it and re-do the analysis, and ask: "Does removing the outlier make a difference to the conclusion?"

We deal with studies in which the data have been produced by:

- random allocation (allowing for experiment-to-causation inference)
- random sampling from populations (allowing for sample-to-population inference).

Practical significance relates to

size of an effect

For a given level of confidence the bigger the sample size, n,

the narrower the confidence interval. (more precision)

For a given sample size: the greater the confidence level

the wider the confidence interval

The larger the test statistic,

then the greater the evidence against the null hypothesis.

for Paired Data

we analyse the differences.

The equation of the line

y = β0 + β1 x
- β0 is the y-intercept: it is where the line cuts/intercepts the y-axis
- β1 is the slope of the line: it is the number in front of x
- β1 = slope = Δy/Δx describes how much change there is in the y-variable when the x-variable increases by 1 unit
- If the slope β1 is positive then y increases as x increases; if it is negative then y decreases as x increases

