STA 106 T/F

As α increases, the power of a test tends to decrease.

FALSE. As α grows larger, the rejection region grows, so it becomes easier to reject the null; in particular, it becomes easier to reject when the null is false. Power therefore increases, not decreases.
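A quick numeric sketch of this fact, using a hypothetical one-sided z-test (the effect size and sample size below are invented for illustration):

```python
from scipy.stats import norm

# Hypothetical one-sided z-test: H0: mu = 0 vs H1: mu = 1,
# with sigma = 1 and n = 9, so the standardized effect is
# delta = 1 * sqrt(9) = 3.
delta = 3.0

def power(alpha):
    # Reject H0 when Z > z_{1-alpha}; under H1, Z ~ N(delta, 1).
    z_crit = norm.ppf(1 - alpha)
    return 1 - norm.cdf(z_crit - delta)

print(power(0.01), power(0.05), power(0.10))  # power rises with alpha
```

Raising α moves the critical value toward the center of the null distribution, so the rejection region grows, and with it the power.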

For the Single Factor ANOVA problem, if we cannot assume that the errors are normally distributed, we can never assume that the sample mean Yi is normally distributed.

FALSE. If the sample size ni is larger than 30 (and the sample was randomly taken from the same population), by the Central Limit Theorem we can assume the sample mean is normally distributed.
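A small simulation illustrating this, assuming (hypothetically) exponential errors, which are strongly skewed:

```python
import numpy as np

rng = np.random.default_rng(0)
# Exponential data are clearly non-normal (theoretical skewness = 2),
# yet means of samples of size n = 50 are already much less skewed.
n, reps = 50, 20_000
samples = rng.exponential(scale=1.0, size=(reps, n))
means = samples.mean(axis=1)

def skewness(x):
    x = x - x.mean()
    return (x ** 3).mean() / (x ** 2).mean() ** 1.5

raw_skew = skewness(samples.ravel())   # close to 2
mean_skew = skewness(means)            # close to 2 / sqrt(50), about 0.28
print(raw_skew, mean_skew)
```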

If we increase the confidence level of a confidence interval from 90% to 99%, the confidence interval will widen.

TRUE. This is because the middle of the sampling distribution must capture a larger percentage, so the tails become smaller. Equivalently, the multiplier tα/2; nT−a (or any other multiplier) increases, which increases the width. Or: because we have to be 99% confident and account for more possibilities, the interval must be wider.
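A sketch of the widening, using t quantiles from scipy with a hypothetical setup of a = 3 groups and nT = 30 (so 27 error degrees of freedom) and a unit standard error:

```python
from scipy.stats import t

# Hypothetical single-factor ANOVA: a = 3 groups, n_T = 30,
# so the error degrees of freedom are n_T - a = 27.
df = 27
se = 1.0  # assume a unit standard error for the mean

width = lambda conf: 2 * t.ppf(1 - (1 - conf) / 2, df) * se
print(width(0.90), width(0.99))  # the 99% interval is wider
```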

As the sample size per group increases, the F test-statistic for single factor ANOVA tends to decrease.

FALSE. As the sample size increases, the group means are estimated more precisely: the treatment sum of squares grows with n while MSE stays near σ2ε, so the F test-statistic should increase.
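A simulation sketch (the group means, σ, and sample sizes are invented): with the true means held fixed, the F statistic from scipy.stats.f_oneway grows sharply with the per-group sample size:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Fixed, genuinely different true group means (so H0 is false);
# only the per-group sample size n changes.
mus = [0.0, 0.5, 1.0]

def f_stat(n):
    groups = [rng.normal(mu, 1.0, size=n) for mu in mus]
    return f_oneway(*groups).statistic

f10, f1000 = f_stat(10), f_stat(1000)
print(f10, f1000)  # the larger sample gives a much larger F
```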

Multiplying a random variable by a non-zero constant will always increase the variance of the random variable.

FALSE. A constant between 0 and 1 in absolute value reduces the variance. For example, if σ2{Y} = 4, then σ2{(1/2)Y} = (1/4)σ2{Y} = 1.
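A numeric check of the counterexample (simulated data with variance near 4):

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.normal(0, 2, size=100_000)   # Var(Y) is approximately 4

half = 0.5 * Y
# Scaling by 1/2 multiplies the variance by (1/2)^2 = 1/4.
print(Y.var(), half.var())
```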

The Scheffe multiplier may be used for creating confidence intervals for μ11.

FALSE. For μ11 alone it is not a contrast, and is not appropriate for Scheffe (which assumes contrasts).

If a confidence interval for a contrast μ1 − (μ2 + μ3)/2 contains zero, we may conclude that there is evidence to suggest all Patch means (μ1, μ2, μ3) are equal.

FALSE. It could be the case, for example, that μ1 = 10, μ2 = 200, μ3 = −180, so that (μ2 + μ3)/2 = 10, but clearly the means are not equal.

For Single Factor ANOVA, if the assumption of normality holds, the assumption of equal variance also holds.

FALSE. It is possible for each group to be approximately normal while the groups have very different standard deviations; normality of the errors says nothing about whether the group variances are equal.

If there are no interactions, a confidence interval for a contrast of μij has no practical meaning.

FALSE. It is still meaningful to consider treatment means, as they still have a practical meaning.

The estimated value of σε2 does not depend on the model chosen.

FALSE. Since SSE changes with each model, so does the estimated variance of the errors.

If all group sample sizes are over 30, we may assume (by the central limit theorem) that the assumptions of Two Factor ANOVA hold.

FALSE. The assumptions require normal populations, equal variances, independent groups, and random samples. Sample sizes over 30 mean the sample means will be approximately normal, but the other assumptions are not necessarily met.

The calculation of the p-value depends on α.

FALSE. The calculation of the p-value is always the same: P(F > FS), where FS is calculated from the sample data. While we compare the p-value to α, its value does not depend on α itself.
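For instance, the p-value for a hypothetical single-factor ANOVA with a = 4 groups and nT = 24 (so df = 3 and 20) is just an F tail probability; α never enters the computation:

```python
from scipy.stats import f

# Hypothetical single-factor ANOVA: a = 4 groups, n_T = 24,
# observed F statistic F_S = 3.1.
F_S, df1, df2 = 3.1, 3, 20
p_value = f.sf(F_S, df1, df2)   # P(F > F_S); alpha appears nowhere
print(p_value)
```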

A smaller p-value for a Shapiro-Wilk test suggests that the normality assumption of Single Factor ANOVA holds.

FALSE. The p-value is the probability of observing our sample data, or something more extreme, if in reality the errors were normally distributed. Therefore, we want a large p-value, not a small one, if we want to conclude our errors are normal.

In a Two-Factor ANOVA with interactions, we have (a − 1)(b − 1) total parameters to estimate.

FALSE. There are (a − 1)(b − 1) interaction terms, but ab total parameters: one μ··, (a − 1) γi's, (b − 1) δj's, and (a − 1)(b − 1) (γδ)ij's, which sum to ab.

The absolute difference in the SSE values may be used to assess which of the models fits better.

FALSE. The absolute difference is not a relative measure, and depends on the units of SSE. Thus, we should use a partial R2.

If we reject the null hypothesis for ANOVA, we can conclude that all means are significantly different from one another.

FALSE. We may conclude at least one mean is significantly different than the other means, but not necessarily that all means are different from one another.

If we reject the null hypothesis, we may always conclude there is evidence to suggest that all population means are different from each other.

FALSE. We may conclude at least one of the means is different, but not necessarily all of them.

For a (1−α)100% confidence interval for μi, it is appropriate to say the probability that μi is in the interval is 1 − α.

FALSE. We say we are (1 − α)100% confident, but once the interval is computed, the probability that μi is in it is either 0 or 1 (unknown, but it is one of the two).
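A simulation sketch of what the confidence level actually describes: the long-run coverage of the procedure, not a probability statement about any one interval (the population values below are invented):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(3)
mu, sigma, n, reps = 10.0, 2.0, 25, 10_000
t_crit = t.ppf(0.975, n - 1)

covered = 0
for _ in range(reps):
    y = rng.normal(mu, sigma, n)
    half = t_crit * y.std(ddof=1) / np.sqrt(n)
    covered += (y.mean() - half <= mu <= y.mean() + half)

# Roughly 95% of intervals capture mu; any single interval either did or didn't.
print(covered / reps)
```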

The estimated standard deviation of Yi· is σε/√nT (you may assume the assumptions of Single Factor ANOVA hold).

FALSE. What is given is a true standard deviation, not an estimate, and it uses the wrong sample size. The estimated standard deviation of the sample mean Yi· is √MSE/√ni: replace σε with √MSE, and nT with ni.

When testing if a "reduced model" is a better fit than a "full model", the smaller the p-value the more evidence there is to suggest the "reduced model" is a statistically better fit.

FALSE. When the p-value is small, we reject the null hypothesis, which is the hypothesis that the "reduced model" is an adequate fit. The null always references the smaller model, so a small p-value is evidence in favor of the "full model".

If you used Y′ = sqrt(Y) as the "best" transformation for your data set and found a confidence interval for μ11 (using the transformed data) was (2, 4), the confidence interval for μ11 in terms of the original units would be (2^2, 4^2).

FALSE. You cannot square the endpoints to get back to the original units: after transforming, the interval estimates the mean of sqrt(Y), and squaring the mean of sqrt(Y) does not give back the mean of the original Y.

If interactions are significant, we must estimate at a minimum (a)(b) interaction terms.

FALSE. Because of the constraints, we need only estimate (a − 1)(b − 1).

Adding a non-zero constant to a random variable does not change the variance of the random variable

TRUE. Adding a constant shifts the data set, but the distances from the mean remain the same. This is why σ2{a + Y } = σ2{Y }.
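A quick numeric check (simulated Y; the constant 7 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
Y = rng.normal(0, 3, size=50_000)
shifted = 7.0 + Y   # add a non-zero constant a = 7

# The shift moves the mean but leaves every deviation from the mean unchanged.
print(Y.var(), shifted.var())
```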

An outlier in ANOVA may cause a violation of the assumption of equal variance by groups, and also normality of the error values.

TRUE. An outlier severely affects the variance of its group, so it may cause unequal variances. And normal distributions assign very low probability to outliers, so seeing one also suggests the errors may not be normal.

The Bonferroni multiplier is appropriate to use with contrasts.

TRUE. Bonferroni is a very general correction, and may be used with all types of confidence intervals, including contrasts.

If we fail to reject the null hypothesis of ANOVA at level α, we would expect all (1 − α)100% pairwise confidence intervals (comparing two means) to contain zero.

TRUE. Failing to reject the null means we lack statistical evidence that any of the population means differ, so we would expect every pairwise confidence interval to contain zero (suggesting no difference in the population means).

The value of a partial R^2 does not depend on the units of Y .

TRUE. It is the proportion of reduction in error, and proportions do not have any units. You can also see this through the equation (SSER−SSEF)/ SSER, which shows that the units of SSE would cancel.
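A numeric sketch with hypothetical SSE values: rescaling Y (say, meters to centimeters) multiplies every squared error by the same factor, which cancels in the ratio:

```python
# Hypothetical SSE values for a reduced and a full model, with Y in meters.
sse_r, sse_f = 120.0, 80.0
partial_r2 = (sse_r - sse_f) / sse_r

# Re-express Y in centimeters: every squared error scales by 100**2.
scale = 100.0 ** 2
partial_r2_cm = (sse_r * scale - sse_f * scale) / (sse_r * scale)

print(partial_r2, partial_r2_cm)  # identical; the units cancel
```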

In the "regression formation" of Two Factor ANOVA (with no interactions), the total number of β's is: a + b−1

TRUE. It is the same as the number of parameters in a Two Factor ANOVA model with γi, δj's, so is (a−1) + (b−1) + 1 = a + b − 1.

If we conduct two hypothesis tests for Single Factor ANOVA, and test A has a p-value of 0.00051, and test B has a p-value of 0.00000921, there is more evidence to reject the null for test B than for test A.

TRUE. It would be more unlikely to observe our sample data for test B, thus we have more evidence to reject the null.

In general, for a particular population, the larger nT , the larger SSTO is.

TRUE. SSTO is a sum of squared deviations, so that the more deviations you are summing and squaring, the larger the value.
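A simulation sketch (one hypothetical population; samples of size 30 vs 3000 drawn from it):

```python
import numpy as np

rng = np.random.default_rng(5)
population = rng.normal(50, 10, size=100_000)  # one fixed population

def ssto(n_t):
    # Total sum of squares for a sample of size n_t from the population.
    y = rng.choice(population, size=n_t, replace=False)
    return ((y - y.mean()) ** 2).sum()

s30, s3000 = ssto(30), ssto(3000)
print(s30, s3000)  # more squared deviations summed, larger total
```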

If we reject the null hypothesis H0 : γi = 0, we conclude that at least one group mean for factor A is different from the others.

TRUE. Since our model is: Yij = μ· + γi + εij , γi is the estimated effect of group i on the overall mean. Thus, if a γi is non-zero, it would suggest that the group mean is different.

In the "regression formation" of Two Factor ANOVA, the value of the estimated intercept β0 is the estimated value of a treatment mean.

TRUE. The intercept represents the average value when all X indicator variables are 0, which would be the average value of Y at a specific combination of groups (or factors).

If Y is a random variable with mean μY , standard deviation σY , and Y ∗ = 10Y , the variance of Y ∗ is larger than the variance of Y .

TRUE. The variance of Y∗ is σ2{10Y} = 100σ2{Y}, so whatever the (non-zero) value of σ2{Y}, the variance will increase.

In Two-Factor ANOVA with no interactions, we have (a − 1) + (b − 1) + 1 total parameters to estimate.

TRUE. There are in general (a − 1) values of γi (since with the constraint we could trivially find the last value), similarly (b − 1) values of δj, and finally an estimate of μ··.

If a confidence interval for some μi − μi' does not contain zero at (1 − α)100% confidence, this suggests we would reject the null hypothesis of single factor ANOVA.

TRUE. This is stating at least two means are significantly different from one another, so we would expect to reject the null that all means are equal.

The maximum value of a conditional or partial R2 is 1.

TRUE. The partial R2 equals (SSER − SSEF)/SSER, so its maximum value of 1 occurs when SSEF = 0, i.e., when the full model fits the data perfectly.

It is possible for SSE to equal 0.

TRUE. This would occur if each group value was equal to the group mean.

If factor A effects are present, we would expect there to be a significant difference in at least one pair-wise comparison of factor A means.

TRUE. This would suggest that the overall mean would differ by Factor A levels, which would suggest at least one confidence interval for pairwise means does not contain zero.

To reduce the probability of a Type I error, we would want to use the lowest value of α possible.

TRUE. α is the probability of a type I error, so lowering α reduces the probability of a type I error.

