Psych Exam 2 Thought Q's

How does the sampling distribution of the mean change when sigma is unknown?

- When sigma is KNOWN, the sampling distribution resembles the population distribution exactly
- When sigma is UNKNOWN, there is more variability, so the curve widens and t tests are used
  - The curve will be similar to the population distribution, but not exact

What assumptions are behind independent groups tests? What is meant if we say that a given test is robust regarding these assumptions? What does this tell us regarding the importance of the assumptions for the independent groups test?

Assumptions:
- Dependent variable is scale
- Participants randomly selected (hardly ever met; must be careful when generalizing to the population)
- Distribution of the population is normal (or N > 30)
- NEW ONE: homogeneity of variance (the population variances of the two or more samples are equal)

Robust: a test is robust when it still produces good results even though some of its assumptions were not met
- Meeting these assumptions improves the quality of the research, but not meeting them does not necessarily invalidate it

Be able to describe Cohen's effect size. What is meant, in practical terms, for a small, medium, or large effect size?

Cohen's d: a measure of effect size that expresses the difference between two means in terms of standard deviation; a standardized difference between means that is NOT influenced by sample size and is based on the spread of the distribution
- Small effect size: 0.2
- Medium effect size: 0.5
- Large effect size: 0.8
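
As a quick illustration, Cohen's d can be computed by dividing the mean difference by the pooled standard deviation; a minimal Python sketch (the scores are invented for illustration):

```python
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
    # Pooled variance weights each sample variance by its degrees of freedom
    pooled_var = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

treatment = [5.1, 6.2, 5.8, 6.5, 5.9]
control   = [4.8, 5.0, 5.3, 4.9, 5.2]
d = cohens_d(treatment, control)
print(round(d, 2))  # compare the result against the 0.2 / 0.5 / 0.8 benchmarks
```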

What does it mean when we say that a statistical test is robust?

Robust hypothesis tests are those that produce fairly accurate results even when the data suggest that the population might not meet some of the test's assumptions

For the population SS/N = σ², but for a sample SS/n is a biased estimator of σ². Explain what that means and why it occurs. Why would SS/n typically underestimate the population variance?

SS/n is a biased estimator of the population variance σ² because SS is computed around the sample mean rather than the population mean. The sample mean is, by definition, the value that minimizes SS for that sample, so deviations from it tend to be smaller than deviations from μ. A sample also typically contains less spread than the full population, so SS/n systematically underestimates σ².
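
A short simulation makes the bias visible: averaging SS/n over many samples lands below the true σ², while SS/(n-1) does not. A sketch assuming NumPy and an arbitrary normal population (σ² = 25):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 25.0  # true population variance (sd = 5)
n = 10

biased, unbiased = [], []
for _ in range(20000):
    sample = rng.normal(loc=100, scale=5.0, size=n)
    ss = np.sum((sample - sample.mean()) ** 2)  # deviations from the SAMPLE mean
    biased.append(ss / n)
    unbiased.append(ss / (n - 1))

print(np.mean(biased))    # ~22.5, systematically below 25
print(np.mean(unbiased))  # ~25, on target
```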

How does the correlation between sets of scores influence the outcome for dependent groups? What does this suggest regarding the use of matching variables?

The same individual provides both the before and after scores, so each individual serves as their own comparison → less variability. This suggests matching is worthwhile: the more highly correlated the matched sets of scores are, the smaller the standard error and the greater the power.

How is the sampling distribution altered for dependent groups vs. independent groups? Compare the size of the standard error that one would typically obtain for dependent vs. independent groups.

- Sampling distribution for dependent groups: a distribution of mean difference scores
- Sampling distribution for independent groups: a distribution of differences between means
- The standard error for dependent groups is typically smaller than that for independent groups
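
A small simulation illustrates the size difference in the standard errors; a sketch assuming NumPy, with invented correlated before/after scores:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
baseline = rng.normal(50, 10, n)
# Correlated "after" scores: same people, plus a treatment effect and some noise
after = baseline + 3 + rng.normal(0, 3, n)

# Independent-groups SE: built from the two separate sample variances
se_ind = np.sqrt(baseline.var(ddof=1) / n + after.var(ddof=1) / n)
# Dependent-groups SE: built from the variance of the DIFFERENCE scores
se_dep = np.std(after - baseline, ddof=1) / np.sqrt(n)
print(round(se_ind, 2), round(se_dep, 2))  # the dependent-groups SE is much smaller
```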

There are multiple ways of evaluating the outcome from an investigation. Compare the interpretation of: significance, confidence interval, effect size, power, and proportion of variance accounted for.

Significance: tells us whether the result is unlikely to have happened by chance alone
Confidence interval: tells us the interval in which we can expect the population mean to fall 95% of the time
Effect size: tells us whether we should care about a significant result at all; helps us understand how much overlap there is between the two distributions
Power: helps determine the likelihood that we will be able to reject the null hypothesis and not be incorrect
- Power is defined as 1 − β, where β is the probability of a Type II error. In other words, it is the probability of detecting a difference between the groups when the difference actually exists (i.e., the probability of correctly rejecting the null hypothesis). Therefore, as we increase the power of a statistical test, we increase its ability to detect a significant (i.e., p ≤ .05) difference between the groups.
Proportion of variance accounted for: tells us how strongly our variables (IV and DV) are related

What is meant by power? Why is it important? Be sure to understand our power diagram and be able to interpret variations.

Statistical power: a measure of our ability to reject the null hypothesis, given that the null is false
- The probability that we will reject the null when we SHOULD
- The probability that we will AVOID a Type II error
- As power increases, there is less of a chance of making a Type II error
- Power should be at least 80% (.80) before following through with conducting a study

What is meant by "statistical significance"? What is meant by "p < .05"? This is a probability of what? Under what circumstances do we make such statements?

Statistical significance: the results of the study are unlikely to occur if the null hypothesis is true (which is why we REJECT the null hypothesis)
- p-value: the probability of obtaining results at least as extreme as ours, if the null hypothesis were true
- p < .05 means the result is unlikely enough under the null hypothesis to be called statistically significant; we make such statements only after comparing our test statistic to the critical cutoff

What are the six steps of hypothesis testing?

Step 1: identify the population(s), comparison distribution, and assumptions
Step 2: state the null and research hypotheses
Step 3: determine the characteristics of the comparison distribution (mean and standard error)
Step 4: determine the critical values, or cutoffs
Step 5: calculate the test statistic (z or t)
Step 6: make a decision (reject or fail to reject the null)
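
The six steps can be walked through with a hypothetical single-sample z test; the numbers (μ = 100, σ = 15, a sample of 36 with mean 106) are invented for illustration:

```python
from scipy import stats

# Steps 1-2: population with mu = 100, sigma = 15 (assumed known); H0: mu = 100, H1: mu != 100
mu, sigma = 100, 15
n, sample_mean = 36, 106

# Step 3: the comparison distribution is a distribution of means
standard_error = sigma / n ** 0.5  # 15 / 6 = 2.5

# Step 4: two-tailed cutoffs at alpha = .05
z_crit = stats.norm.ppf(0.975)  # ~1.96

# Step 5: calculate the test statistic
z = (sample_mean - mu) / standard_error  # (106 - 100) / 2.5 = 2.4

# Step 6: make a decision
decision = "reject H0" if abs(z) > z_crit else "fail to reject H0"
print(z, decision)  # 2.4 reject H0
```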

What alternatives to traditional hypothesis testing are available? How do they differ (assumptions, interpretation, etc.)?

The Bayesian approach: can determine how likely our observed data are to occur under ANY hypothesis
- Probability is defined as our degree of belief in a hypothesis, taking into account both the data we have collected and the beliefs we had about that hypothesis prior to collecting the data
- Each new piece of data is evaluated in terms of a growing set of prior beliefs (rather than in a vacuum)

Be able to explain how the CLT permits us to estimate the position of μ if we only know sample information. Why is it that we can take info from one sample and infer things about the population?

The CLT says that the greater the N, the closer the distribution of sample means comes to a normal curve. If the sample size is large enough, then, we can use these data points to create an approximately normal curve and base our population estimates on it.
- We can use a distribution of means, which is less variable and is centered on the population mean
- We can take info from one sample and infer things about the population because of generalizability
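
A quick simulation of the CLT's claim, assuming NumPy and a deliberately skewed (non-normal) population:

```python
import numpy as np

rng = np.random.default_rng(3)
# Skewed population: individual scores are far from normally distributed
population = rng.exponential(scale=2.0, size=100000)

# Distribution of means (N = 30 per sample): much narrower, and ~normal by the CLT
means = np.array([rng.choice(population, 30).mean() for _ in range(5000)])
print(round(population.std(), 2), round(means.std(), 2))  # means vary far less
# means.std() approximates sigma / sqrt(N) = population.std() / sqrt(30)
```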

What is meant by dependent groups? Give examples of research designs appropriately analyzed with these techniques.

This is a within-groups (repeated-measures) design: the same group is exposed to both the control condition and the experimental condition
- This type of design has greater power than an independent-groups design with the same number of participants
- EX: before-and-after scores measured on the same group once a condition is presented

What is the value of replicating a study that has already found statistically significant results?

To see whether the effect is practically significant and whether it holds up in terms of external validity and generalizability across different populations. Replication also guards against the possibility that the original significant result was a Type I error.

What is meant by independent groups designs (aka between subjects designs or between groups designs)? Give examples.

A between-groups design means there are two (or more) groups assigned to different conditions of the IV; each group is independent of the other. EX: comparing a treatment group against a separate control group that never receives the treatment.

What is a confidence interval, and what does it reveal? What is it centered around? How is it related to hypothesis testing? How is it different?

We expect the population mean to fall within a certain interval a certain percentage of the time (usually 95%) if we conducted this same study with the same sample size over and over.
- Centered around the sample mean
- Lets us make estimates about the population: the population value will likely fall in this interval
- Related to hypothesis testing: if the value predicted by the null falls outside the 95% interval, we would reject the null at α = .05; unlike a hypothesis test, though, the interval also conveys how precise our estimate is
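
A minimal sketch of computing a 95% confidence interval around a sample mean, assuming SciPy and invented scores (the t distribution is used since σ is unknown):

```python
import numpy as np
from scipy import stats

scores = np.array([12, 15, 14, 10, 13, 16, 12, 14])
mean = scores.mean()
sem = stats.sem(scores)  # estimated standard error (uses n - 1)

# 95% CI centered on the sample mean, using the t distribution
lo, hi = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)
print(round(lo, 2), round(hi, 2))
```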

What is meant by a Type II error (again think in practical terms)? What are some factors that influence the likelihood of this error?

We failed to reject the null hypothesis when it should have been rejected; you thought you failed when you really succeeded (false negative)
- Potential factors: not enough participants, low power, high standard deviation

Dividing SS by "n-1" results in a "sliding" adjustment, yielding an unbiased estimate of σ². Explain the adjustment made by "n-1", especially as it relates to sample size.

We subtract 1 from the sample size in the denominator to correct for the probability that the sample standard deviation slightly underestimates the actual standard deviation in the population.
- Any sample is likely to have somewhat less spread than does the entire population
- Subtracting one from a large sample makes only a small difference; subtracting one from a small sample is noticeable and makes a large difference
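
The "sliding" nature of the adjustment is just the ratio n/(n-1): a big correction for small samples, a negligible one for large samples:

```python
# Ratio of the unbiased estimate SS/(n-1) to the biased estimate SS/n is n/(n-1)
for n in (5, 30, 1000):
    print(n, n / (n - 1))
# 5 -> 1.25 (a 25% inflation), 30 -> ~1.034, 1000 -> ~1.001
```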

Explain the way(s) in which the experimenter influences or has control over Type I and Type II errors (direct & indirect).

You can perform the test multiple times to see if you get the same results before reporting your findings; you can also evaluate the power of your test before you act so you know ahead of time the likelihood of rejecting/failing to reject the null hypothesis.
- Type I: the experimenter sets the alpha level directly
- Type II: influenced indirectly through the choice of sample size, directionality, and dependent vs. independent groups

What is the difference between the t and z distributions? Explain why the t distribution needs to be wider than that for z?

The t distribution is wider (heavier-tailed) than the z distribution because it uses the estimated standard error instead of the actual population standard error, and that estimation adds uncertainty.
- The t statistic is therefore more conservative than a z statistic: a larger value is needed to reach the same cutoff
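
The extra width shows up directly in the critical values; a sketch assuming SciPy:

```python
from scipy import stats

# Two-tailed .05 cutoffs: t is wider than z, converging as df grows
z_crit = stats.norm.ppf(0.975)  # ~1.96
for df in (5, 10, 30, 100):
    print(df, round(stats.t.ppf(0.975, df), 3))
# t cutoffs: 2.571, 2.228, 2.042, 1.984 -- all above 1.96, approaching it
```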

What are the three assumptions that underlie parametric tests?

- Dependent variable assessed using a scale measure
- Participants randomly selected (typically NOT MET; must be cautious when generalizing)
- Distribution of the population approximately normal (or N > 30)

Why must a specific alternative hypothesis be identified to calculate power? How do power hypotheses differ from our original (null & research) hypotheses?

We center a second distribution around the specific alternative hypothesis (what we expect the mean to be); power is then the probability of correctly rejecting the null hypothesis.
- We need a specific alternative to compute how much of that distribution falls beyond the null's cutoff; the original research hypothesis only says a difference exists, without saying how large it is
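
A sketch of the calculation, with invented numbers for the null and the specific alternative (a one-tailed z test is assumed):

```python
from scipy import stats

# Hypothetical numbers: H0 mu = 100, specific alternative mu = 106, sigma = 15, n = 36
mu0, mu1, sigma, n = 100, 106, 15, 36
se = sigma / n ** 0.5  # 2.5

# One-tailed .05 cutoff, expressed on the raw score scale
crit_mean = mu0 + stats.norm.ppf(0.95) * se

# Power: the area of the ALTERNATIVE distribution beyond the cutoff
power = 1 - stats.norm.cdf(crit_mean, loc=mu1, scale=se)
print(round(power, 3))  # roughly .77-.78
```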

Why do we calculate confidence intervals?

Confidence intervals let us check a hypothesis-testing decision in another form: if the value expected under the null (e.g., no difference) falls outside the 95% interval, we can reject the null. The interval also shows how precisely we have estimated the effect.

How are Type I and Type II errors different from experimental biases (malfeasance) in the conduct of research?

Biases are due to the researcher's prior thoughts/opinions or conduct; Type I and Type II errors are due to chance variation in sampling, not misconduct
- Biases are "bad" and avoidable; some risk of error is inevitable in any statistical decision

Explain the connection between sampling distributions (as specified by the CLT) and hypothesis testing.

With the sampling distribution, we can take data points obtained from the sample and form a distribution whose characteristics the CLT specifies (its mean equals the population mean, and it approaches normality as N grows)
- This allows us to make estimates about the population in hypothesis testing

What are some criticisms of, or concerns regarding, traditional hypothesis testing?

The traditional approach: probability is based on what would happen in the long run if we repeated the same events over and over again
- We calculate the probability of observing our data, assuming the null is true and assuming we have repeatedly drawn samples from the population
- One problem: it allows us only to reject or fail to reject the null hypothesis; we can never simply reject or accept the research hypothesis
- Consequence: we do not learn the probability of our data occurring if the research hypothesis is true
- It assumes we never learn from prior experience (the comparison distribution is always some version of chance)

How is power affected by using independent vs. dependent groups? Discuss the impact of the size of the correlation on power.

A within-subjects design is more powerful, so fewer participants are needed; more participants are needed in a between-groups design to establish a relationship between the variables. The larger the correlation between the paired scores, the smaller the standard error of the difference scores and the greater the power.

How is the shape of the t distribution affected by df? Explain why.

As N increases (which increases the value of df), the t distribution comes closer and closer to the z distribution. With more data, the estimate of the population standard deviation improves, so the extra width that compensates for estimation uncertainty shrinks.

How does considering our conclusions in terms of effect size help to prevent incorrect interpretations of our findings?

The effect size tells us whether a statistically significant finding is large enough to matter practically and be something we should care about; without it, a significant result from a very large sample could be misread as an important one.

What are the advantages and disadvantages of using a one-tailed test, and why do we almost always use a two-tailed test?

One-tailed test: a hypothesis test in which the research hypothesis is directional, positing either a mean decrease or a mean increase in the DV
Two-tailed test: a hypothesis test in which the research hypothesis does not indicate a direction of the mean difference or change in the DV; it merely indicates that there will be a mean difference
- One-tailed tests are used only when the researcher is absolutely certain the effect cannot go in the other direction, which is rare
- We use a two-tailed test because it can detect an effect in either direction; it is the more conservative and more credible default

Consider the pros and cons for using dependent groups for hypothesis testing.

Pros: less variability, lower cost / fewer participants needed, better power, less error
Cons: loss of extremes, carryover effects

If α = .05, what does that mean (in practical terms)? Why don't we set alpha even lower?

This means that the likelihood of committing a Type I error is 5%.
- Lower values of alpha make it harder to reject the null hypothesis (we are less likely to detect a difference when there is one); lowering alpha reduces the probability of a Type I error but raises the probability of a Type II error, which is why we don't set alpha even lower

What is meant by a Type I error (be sure that you can describe it in practical terms, for a real investigation, not just referring to the null hypothesis)?

Concluding that results are statistically significant when, in reality, they came about purely by chance or because of unrelated factors.
- You should have failed to reject the null hypothesis, but instead you rejected it
- You thought you were successful, but you weren't
- False positive

How does specifying a desired effect size help in estimating sample size? How is this considered a short-cut, i.e., what information is no longer required by using the effect size?

- We don't need to know the variability (standard deviation) in advance
- We instead specify the Cohen's d we want to detect: because effect size already expresses the mean difference in standard deviation units, it stands in for both the raw difference and the variability
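
A common normal-approximation formula (not from these notes) estimates the n per group from d, α, and desired power alone; a sketch assuming SciPy:

```python
from scipy import stats

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-tailed independent-groups t test."""
    z_a = stats.norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_b = stats.norm.ppf(power)          # ~0.84 for power = .80
    return 2 * ((z_a + z_b) / d) ** 2

print(round(n_per_group(0.5)))  # ~63 per group for a medium effect
```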

We calculated the "proportion of variance accounted for" (r²) by the IV. What does this mean? What does it reveal about our experiment that isn't revealed by significance testing?

- How much of the variability can be explained by the experiment alone (some variability is attributable to individuals; this shows what comes only from the experiment)
- Shows how much influence the IV had on the DV
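
For a t test, r² can be recovered from the test statistic as t² / (t² + df); a sketch with an invented result:

```python
def r_squared(t, df):
    """Proportion of DV variance accounted for by the IV, from a t statistic."""
    return t ** 2 / (t ** 2 + df)

# Hypothetical result: t(38) = 2.5 is significant, but...
print(round(r_squared(2.5, 38), 3))  # ~0.14: the IV explains ~14% of DV variability
```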

What are three ways to increase the power of a statistical test?

- Increase the alpha level (e.g., .05 → .10) — NOT acceptable (inflates Type I error)
- Turn a two-tailed test into a one-tailed test — DO NOT do this
- Increase the sample size OR decrease the standard deviation (through experimental control; reduced variability → less overlap → more power)
- Increase the difference between the means

Why do scientists want to minimize Type I errors? Think about potential costs associated with a Type I error?

- When researchers reject the null, the research is usually published, while studies that fail to reject usually are not, so a great amount of work goes to "waste." If a study containing a Type I error is published, the public is misled.

Explain the difference between directional and non-directional hypotheses. What are the implications of using a one-tail vs. two-tail hypothesis test? When should each be used?

Directional: predicts either an increase or a decrease, NOT both
- H0: μ1 ≥ μ2; H1: μ1 < μ2
Non-directional: predicts a difference in EITHER direction
- H0: μ1 = μ2; H1: μ1 ≠ μ2
One-tailed hypothesis test: the research hypothesis is directional, positing either a mean increase or a mean decrease in the DV because of the IV, but not both
- Rarely seen in research → used only when absolutely certain the effect CANNOT go in the other direction
Two-tailed hypothesis test: the research hypothesis does NOT indicate a direction of the mean difference or change in the DV, merely that there WILL be a mean difference
- Use a two-tailed test unless stated otherwise

What does it mean to say a test is directional or nondirectional?

Directional: the alternative hypothesis contains the less than or greater than sign. This indicates that we are testing whether or not there is a positive or negative effect. (one-tailed test) Non-directional: the alternative hypothesis contains the "not equal to" sign; can go in either direction (two-tailed test)

What does an effect size contribute that significance does not? Is it possible to derive significance, but have a weak effect size? When might this be more likely?

Effect size indicates the size of a difference and is unaffected by sample size; it tells us how much the two populations DO NOT overlap
- The less overlap, the BIGGER the effect size (more difference between the two means)
- Significance tells us a difference between two means EXISTS, while effect size puts an actual number on it
- With large sample sizes, results can be statistically significant yet have a weak effect size
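
A simulation illustrates the last point: with huge samples, even a trivially small true difference comes out significant. A sketch assuming NumPy/SciPy, with an invented true effect of d = 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Huge samples with a tiny true difference (true d = 0.05)
a = rng.normal(0.00, 1.0, 100000)
b = rng.normal(0.05, 1.0, 100000)

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(p < .05, round(d, 2))  # significant, yet d is far below the 0.2 "small" benchmark
```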

What is meant by homogeneity of variance (homoscedasticity)? How can we test to see whether our data meet this assumption?

Homogeneity of variance: the two populations from which the samples are selected must have equal variances
- F-max = largest variance / smallest variance
  - Rule of thumb: if F-max is greater than 2, the two distributions have different variances (must be cautious!)
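
A sketch of the F-max check with invented data, assuming NumPy:

```python
import numpy as np

def f_max(*samples):
    """Ratio of the largest to the smallest sample variance."""
    variances = [np.var(s, ddof=1) for s in samples]
    return max(variances) / min(variances)

g1 = [3, 5, 4, 6, 5]   # tightly clustered scores
g2 = [2, 9, 1, 8, 4]   # much more spread out
ratio = f_max(g1, g2)
print(round(ratio, 2), "caution!" if ratio > 2 else "ok")
```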

Be able to demonstrate the impact of several variables (e.g., α, n, directional/non-directional tests, difference to be detected, variability) on power and β. Think about what each does to our power diagram.

Increasing the alpha level: we obtain more power (the cutoff moves, so there is less overlap)
- Side effect: increases the probability of a Type I error (e.g., from 5% to 10%)
N: increasing the sample size makes the distributions of means narrower, so they overlap less; less overlap means more statistical power
One-tailed versus two-tailed tests: using a one-tailed test increases our chance of rejecting the null, which translates into an increase in statistical power
Mean difference between levels of the IV: as the difference between the means becomes larger, there is less overlap between the curves; less overlap means more statistical power
Variability: reduced variability → less overlap → more power

How does our sampling distribution change when we use two groups in hypothesis testing?

Independent-samples t test (between groups): we use a distribution of differences between means, instead of a distribution of mean differences (paired-samples t test)
- There is more variability with the independent-samples t test, so its sampling distribution is more spread out (the standard error is greater) than that of the paired-samples t test

What difficulties are encountered in estimating the required sample size for a given study? What kinds of information are required before you can estimate the appropriate sample size?

Estimating sample size requires a lot of information up front: alpha, desired power (1 − β), variability or the expected effect size, directionality, dependent vs. independent groups, and the type of test. Much of this (especially variability and effect size) is difficult to know before the study is run.

Explain what is represented by the null and alternate hypotheses? Be able to generate examples of each. Specify attributes of each. Why do we focus so much on the null hypothesis for hypothesis testing?

The null hypothesis states there is NO difference between the means; the alternate hypothesis states there IS a difference between the means.
- Both are stated in terms of population parameters
- The comparison distribution is based on the null; we can use it to see whether the collected data are significantly different from what the null predicts
- We focus on the null hypothesis because it specifies an exact distribution to test against, letting us compute how likely our results would be if the null were true

Sometimes we have equal n in two groups that are being compared and sometimes not. What impact does this have on our calculations? What is meant by a "pooled standard error", and why is that necessary?

Pooled variance involves taking an average of the two sample variances while accounting for any differences in the sizes of the two samples; it is an estimate of the common population variance.
- The estimate of variance from the larger sample counts for more in the pooled variance than that from the smaller sample, because larger samples tend to lead to somewhat more accurate estimates than smaller samples do
- When the ns of the two groups are equal, the variances can simply be averaged together (an unweighted average); when they are unequal, they are weighted (the one with the bigger n gets more weight)
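
The weighting can be seen in a direct implementation; a sketch with invented variances:

```python
def pooled_variance(s1_sq, n1, s2_sq, n2):
    """Weight each sample variance by its degrees of freedom (n - 1)."""
    return ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

# Equal n: a simple unweighted average of 10 and 20
print(pooled_variance(10.0, 21, 20.0, 21))  # 15.0
# Unequal n: the larger sample (n = 41, variance 10) pulls the estimate toward itself
print(pooled_variance(10.0, 41, 20.0, 11))  # 12.0, closer to 10 than to 20
```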

