Psych 2220 Exam #2

One-tailed Test

"Directional" test. Ex) NULL: The average is not greater than the other average. RESEARCH: The average is greater than the other average Notation: H0: μ1≤μ2 H1: μ1>μ2 Cut off one tail of distribution. Less conservative than two-tailed.

Two-tailed Test

"Nondirectional" test ("difference"). NULL: There is no difference... RESEARCH: There is a difference... Notation: H0: μ1=μ2 H1: μ1≠μ2 Rejection region divided between 2 tails.

p Level (alpha)

Probability used to determine the critical values. Typically 0.05 (sometimes 0.01): we reject the most extreme 5% of the distribution. There is a 5% (or smaller) chance we would find results this extreme if the null hypothesis were true. Whether the test is one-tailed or two-tailed matters here.
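A minimal Python sketch of how alpha maps onto critical z values for one- vs. two-tailed tests (assumes scipy is available; not part of the original card):

# Critical z values implied by a given alpha
from scipy.stats import norm

alpha = 0.05
z_two_tailed = norm.ppf(1 - alpha / 2)   # ~1.96: alpha split between both tails
z_one_tailed = norm.ppf(1 - alpha)       # ~1.645: all of alpha in one tail
print(z_two_tailed, z_one_tailed)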

Factors Affecting Power

1. Alpha level: a higher alpha increases power (e.g., moving from 0.05 to 0.10). The trade-off is an increased chance of a Type I error.
2. One- or two-tailed test: a one-tailed test increases power, but it is only appropriate if we are certain of the direction of the effect.
3. Sample size and variability: a larger sample size and a smaller standard deviation (less "noise") increase power.
4. Actual difference (effect size): increasing the difference between the means (a stronger manipulation = a more pronounced effect) increases power.
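A rough Python sketch of these factors for a one-sample, one-tailed z test; the function name and all numbers here are hypothetical, and scipy is assumed to be available:

from math import sqrt
from scipy.stats import norm

def power(mu0, mu1, sigma, n, alpha):
    # Critical sample mean under H0, then probability of exceeding it under H1
    se = sigma / sqrt(n)
    crit = mu0 + norm.ppf(1 - alpha) * se
    return norm.sf(crit, loc=mu1, scale=se)   # power = 1 - beta

print(power(100, 105, 15, 30, 0.05))   # baseline
print(power(100, 105, 15, 30, 0.10))   # higher alpha -> more power
print(power(100, 105, 15, 60, 0.05))   # larger n -> more power
print(power(100, 110, 15, 30, 0.05))   # bigger true difference -> more power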

Three Assumptions for Hypothesis Testing

1. The dependent variable is assessed with a scale measure (not clearly nominal or ordinal).
2. Participants are randomly selected (this assumption can be violated if we are cautious about generalizing).
3. The population has an approximately normal distribution (usually fine if the sample size is greater than about 30).
These assumptions are required to conduct the analyses, but hypothesis tests are fairly robust to violations, so results typically remain quite accurate.

Calculating Confidence Intervals

1. Draw a picture of a distribution that will include the confidence interval (use the SAMPLE MEAN as the center of the distribution).
2. Indicate the bounds of the confidence interval on the drawing (based on the confidence level).
3. Determine the z statistics that fall at each line marking the middle portion corresponding to the confidence level (e.g., the middle 95%).
4. Turn the z statistics back into raw-score means.
5. Check that the confidence interval makes sense.
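A worked example with made-up numbers: assuming a sample mean of 100, σ = 15, and N = 25, the standard error is 15/√25 = 3; the middle 95% lies between z = −1.96 and z = +1.96, so the raw-score bounds are 100 − 1.96(3) = 94.12 and 100 + 1.96(3) = 105.88, which bracket the sample mean sensibly (step 5).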

Six Steps of Hypothesis Testing

1. Identify the populations, distribution, and assumptions, and then choose the appropriate hypothesis test.
2. State the null and research hypotheses in both words and symbolic notation.
3. Determine the characteristics of the comparison distribution.
4. Determine the critical values, or cutoffs, that indicate the points beyond which we will reject the null hypothesis.
5. Calculate the test statistic.
6. Decide whether to reject or fail to reject the null hypothesis.
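A minimal Python sketch of a one-sample z test following these steps (all numbers are hypothetical; scipy is assumed to be available):

from math import sqrt
from scipy.stats import norm

mu, sigma = 100, 15            # assumed population parameters (steps 1-3)
m, n = 106, 36                 # assumed sample mean and sample size
alpha = 0.05

se = sigma / sqrt(n)           # step 3: standard error of the distribution of means
z_crit = norm.ppf(1 - alpha / 2)   # step 4: two-tailed critical values (~±1.96)
z = (m - mu) / se              # step 5: test statistic
reject = abs(z) > z_crit       # step 6: decision
print(z, z_crit, reject)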

Standardization

Allows for fair comparisons of data from two different scales. When comparing z scores, you use the number of standard deviations a score is from the mean.
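For example (numbers made up for illustration): a score of 30 on a test with mean 25 and SD 5 gives z = +1.0, while a score of 620 on a test with mean 500 and SD 100 gives z = +1.2, so the second score is relatively higher even though the raw scales differ.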

Critical Region

The area beyond the critical value(s), in the tail or tails. Reject the null hypothesis if the test statistic falls in this region.

Normal Curve

As sample size increases, the shape of the distribution becomes more like the normal curve. Applies to scale variables. Most dependent variables are assumed to be normally distributed. The area under the normal curve can be used to calculate the percentile for any score.

Interval Estimate

Based on our sample statistic, the range of sample statistics we would expect if we repeatedly sampled from the same population (e.g., a confidence interval).

z Score

Tells you how far a score is from the mean, in standard deviation units. The z distribution has a mean (μ) of zero and a standard deviation (σ) of one; z = (X − μ)/σ, where X is the raw score, μ is the population mean, and σ is the population standard deviation. z scores tell you where a value falls within a normal distribution.
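A minimal Python sketch of the formula, using assumed IQ-like numbers, plus the percentile idea from the Normal Curve card (scipy assumed available):

from scipy.stats import norm

X, mu, sigma = 130, 100, 15
z = (X - mu) / sigma                 # z = 2.0: two standard deviations above the mean
percentile = norm.cdf(z) * 100       # ~97.7th percentile under the normal curve
print(z, percentile)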

Hypothesis Testing Assumption

A characteristic of the population we are sampling from that is necessary for accurate inferences.

Statistically Significant

The data differ from what we would expect by chance if there were no actual difference (i.e., if the null hypothesis were true). Data are statistically significant when the test statistic is more extreme than the critical value(s), and the null hypothesis is therefore rejected.

The Central Limit Theorem

The distribution of sample means approaches a normal distribution as sample size increases, even when the population from which the samples are drawn is not normal. A distribution of means is also less variable than a distribution of individual scores.
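A quick simulation sketch of this idea, using a skewed (exponential) population; numpy is assumed to be available and the specific numbers are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
draws = rng.exponential(scale=1.0, size=(10_000, 30))   # 10,000 samples of N = 30
sample_means = draws.mean(axis=1)

# The distribution of means is roughly normal and less variable than the raw scores
print(sample_means.mean(), sample_means.std())           # mean ~1.0, SD ~1/sqrt(30) ~ 0.18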

Cohen's d

Estimates effect size. Assesses the difference between means using the standard deviation instead of the standard error. Conventionally, a small effect size is about 0.2-0.49, a medium effect size is 0.5-0.79, and a large effect size is 0.8 or more. It reflects the degree to which the sample mean exceeded the value expected by chance: it tells how many standard deviations the sample mean is above (or below) the population mean.
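A one-line sketch with hypothetical numbers:

mu, sigma = 100, 15       # assumed population parameters
m = 106                   # assumed sample mean

d = (m - mu) / sigma      # uses sigma, not the standard error, so N does not shrink it
print(d)                  # 0.4 -> a "small" effect by the conventions above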

Parametric Tests

Inferential statistical test based on assumptions about a population

Nonparametric Tests

Inferential statistical test not based on assumptions about the population

Confidence Intervals

An interval estimate, based on the sample statistic, that would include the population mean a certain percentage of the time were we to sample repeatedly from the same population. Typically set at 95%. It is the range around the sample mean formed by adding and subtracting a margin of error. Confirms the findings of hypothesis testing and adds more detail.

Turning Confidence Intervals Into Raw Scores

M(lower) = −z(σM) + M(sample); M(upper) = z(σM) + M(sample). Use the sample mean and the standard error (σM). z is the z score at the confidence interval boundaries (−1.96 and +1.96 for a 95% interval, i.e., a two-tailed test at the 5% level).
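A short Python sketch of these formulas, using the same hypothetical numbers as the worked example above:

from math import sqrt

m_sample, sigma, n = 100, 15, 25
se = sigma / sqrt(n)              # standard error, sigma_M
z = 1.96                          # 95% interval (two-tailed, alpha = 0.05)

m_lower = -z * se + m_sample      # 94.12
m_upper =  z * se + m_sample      # 105.88
print(m_lower, m_upper)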

Distribution of Means

Mean of the distribution tends to be the mean of the population. The standard deviation of the distribution tends to be less than the standard deviation of the population (of scores) and is referred to as the standard error. Follows the central limit theorem. The shape of the distribution approximates the normal curve if the population of scores is normally shaped OR the size of each sample in the distribution is at least about 30 (N ≥ 30). z = (M − μM)/σM, where M is the sample mean, μM is the mean of the distribution of means (equal to the population mean), and σM is the standard error.

Statistical Power

A measure of our ability to reject the null hypothesis, given that the null hypothesis is false. It is the probability that we will: reject the null when we should, find an effect (difference) when it really exists, and avoid a Type II error (β); so power = 1 − β. In hypothesis testing we compare two states of the world, H0 true and H0 false, which can be represented as two overlapping normal distributions. Alpha (α) cuts through the H0-true distribution, but for a given effect size it also maps onto a point in the H0-false distribution, partitioning it into β and (1 − β). So as alpha decreases, power decreases.
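A sketch of that partition in Python for a one-tailed test (all numbers hypothetical; scipy assumed available):

from math import sqrt
from scipy.stats import norm

mu0, mu1, sigma, n, alpha = 100, 105, 15, 30, 0.05
se = sigma / sqrt(n)

cutoff = norm.ppf(1 - alpha, loc=mu0, scale=se)  # alpha cuts the H0-true distribution
beta   = norm.cdf(cutoff, loc=mu1, scale=se)     # Type II error area under the H0-false distribution
print(beta, 1 - beta)                            # power = 1 - beta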

z Distribution

Normal distribution of standardized scores.

z Table

Provides the percentage of scores between a given z score and the mean, AND the percentage in the tail of the distribution.

Effect Size

The size of a difference that is unaffected by sample size. Allows for standardization across studies. A larger effect size corresponds to less overlap between the two distributions. Overlap decreases (and effect size increases) when the two distributions' means are farther apart or when the variation within each population is smaller.

Standard Error

The standard deviation of the distribution of means. Smaller than the standard deviation of the population of individual scores, and it takes on a smaller value as N increases: σM = σ/√N.
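For example, with a hypothetical population standard deviation of σ = 15 and samples of N = 25, σM = 15/√25 = 3; with N = 100 it shrinks to 15/10 = 1.5.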

Point Estimate

A summary statistic: one number used as an estimate of a population parameter (e.g., a sample mean used to estimate the population mean).

Critical Values

The test statistic value(s), e.g., z score(s), that must be exceeded to reject the null hypothesis. There may be one critical value (a one-tailed test) or two critical values (one in each tail), depending on whether the hypothesis is directional.

