Exam 4 Psychology Statistics

In using a chi-square test, if each expected frequency is equal to its corresponding observed frequency, what is the value of χ^2?

0

In an ANOVA study on the impact that various forms of cellphone use have on driving speed, a researcher concludes that there are no systematic treatment effects. What was the F-ratio closest to?

1

Which of the following are accurate considerations of correlations? I. The value of a correlation can be affected greatly by the range of scores represented in the data. II. One or two extreme data points can have a dramatic effect on the value of a correlation. III. The effectiveness of a correlation is dramatically decreased for high SS values.

I and II only

Which of the following are assumptions for using a chi-square test? I. Independence of observations II. The population is normal III. All expected frequencies are at least 5

I and III only

Which of the following values represents a perfect correlation? I. -1 II. 0 III. 1

I and III only

A researcher is conducting an ANOVA test to measure the influence of the time of day on reaction time. Participants are given a reaction test at three different periods throughout the day: 7 a.m., noon, and 5 p.m. In this design, there are _______ factor(s) and ______ level(s).

1, 3

Which of the following tests has a fundamental purpose of evaluating the significance of the relationship between two variables? I. Chi-square II. Pearson correlation III. Tests of mean difference IV. ANOVA

I, III, and IV only

Order of Mathematical Operations

1. Any calculation contained within parentheses is done first.
2. Squaring (or raising to other exponents) is done second.
3. Multiplying and/or dividing is done third. A series of multiplication and/or division operations should be done in order from left to right.
4. Summation using the ∑ notation is done next.
5. Finally, any other addition and/or subtraction is done.

The Characteristics of a Relationship

1. The Direction of the Relationship 2. The Form of the Relationship 3. The Strength or Consistency of the Relationship

Assumptions of the t Test

1. The values in the sample must consist of independent observations. 2. The population sampled must be normal.

H1

Alternative Hypothesis

Outliers

An outlier is an individual with X and/or Y values that are substantially different (larger or smaller) from the values obtained for the other individuals in the data set. The data point of a single outlier can have a dramatic influence on the value obtained for the correlation.

Cohen's w

Cohen introduced a statistic called w that provides a measure of effect size for either of the chi-square tests. The formula for Cohen's w is very similar to the chi-square formula but uses proportions instead of frequencies. In the formula, the Po values are the observed proportions in the data and are obtained by dividing each observed frequency by the total number of participants (observed proportion = Po = fo/n). Similarly, the Pe values are the expected proportions that are specified in the null hypothesis. The formula instructs you to:
1. Compute the difference between the observed proportion and the expected proportion for each cell (category).
2. For each cell, square the difference and divide by the expected proportion.
3. Add the values from step 2 and take the square root of the sum.
Cohen also suggested guidelines for interpreting the magnitude of w, with values near 0.10 indicating a small effect, 0.30 a medium effect, and 0.50 a large effect.
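To make the steps concrete, here is a minimal Python sketch of that computation; the observed frequencies and the equal expected proportions are hypothetical.

```python
import numpy as np

# Hypothetical data: observed frequencies for four categories (n = 50)
# and the equal proportions specified by a no-preference null hypothesis.
f_o = np.array([18, 17, 7, 8])
p_e = np.array([0.25, 0.25, 0.25, 0.25])

n = f_o.sum()
p_o = f_o / n                        # observed proportions, Po = fo / n

# Cohen's w: square root of the summed (Po - Pe)^2 / Pe values
w = np.sqrt(np.sum((p_o - p_e) ** 2 / p_e))
print(round(w, 3))                   # ~0.10 small, ~0.30 medium, ~0.50 large
```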

Two-Tailed Test

Comes from the fact that the critical region is divided between the two tails of the distribution. This is by far the most widely accepted procedure for hypothesis testing. It requires a relatively large difference independent of direction.

Which situation would be appropriate for obtaining a phi-coefficient with a Pearson test?

Comparing gender with whether or not someone has a PhD

The Greek letter mu, μ

Identifies the mean for a population.

M

Identifies the mean for a sample.

N

Identifies the number of scores in a population.

n

Identifies the number of scores in a sample.

Alpha Value

Is a small probability that is used to identify the low-probability samples.

σ^2

Is the variance for a population.

H0

Null Hypothesis

The relationship between age and height in trees is most likely a _____________ correlation.

Positive

In a correlation study, X = the age of the participant, Y = the average driving speed on the highway, and Z = the average number of accidents per year. What does rXY·Z represent?

The correlation between age and average speed with yearly accidents held constant.

The Chi-Square Test for Independence

The chi-square test for independence uses the frequency data from a sample to evaluate the relationship between two variables in the population. Each individual in the sample is classified on both of the two variables, creating a two-dimensional frequency distribution matrix. The frequency distribution for the sample is then used to test hypotheses about the corresponding frequency distribution in the population.
The null hypothesis for the chi-square test for independence states that the two variables being measured are independent. This general hypothesis can be expressed in two different conceptual forms, each viewing the data and the test from slightly different perspectives:
1. The data are viewed as a single sample with each individual measured on two variables. The null hypothesis states that there is no relationship between the two variables.
2. The data are viewed as two (or more) separate samples representing two (or more) populations or treatment conditions. The goal of the chi-square test is to determine whether there are significant differences between the populations. The null hypothesis states that there is no difference between the two populations.
Two variables are independent when there is no consistent, predictable relationship between them. In this case, the frequency distribution for one variable is not related to (or dependent on) the categories of the second variable. As a result, when two variables are independent, the frequency distribution for one variable will have the same shape (same proportions) for all categories of the second variable.
The chi-square test for independence uses the same basic logic that was used for the goodness-of-fit test. First, a sample is selected, and each individual is classified or categorized. Because the test for independence considers two variables, every individual is classified on both variables, and the resulting frequency distribution is presented as a two-dimensional matrix. As before, the frequencies in the sample distribution are called observed frequencies and are identified by the symbol fo. The next step is to find the expected frequencies, or fe values, for this chi-square test. As before, the expected frequencies define an ideal hypothetical distribution that is in perfect agreement with the null hypothesis. Once the expected frequencies are obtained, we compute a chi-square statistic to determine how well the data fit the null hypothesis.
If R is the number of rows and C is the number of columns, and you remove the last column and the bottom row from the matrix, you are left with a smaller matrix that has C - 1 columns and R - 1 rows. The number of cells in the smaller matrix determines the df value: df = (R - 1)(C - 1).
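As a sketch of how such a test might be run in software, the following uses a hypothetical 2 x 3 frequency matrix with scipy.stats.chi2_contingency, which returns the chi-square statistic, the p value, df = (R - 1)(C - 1), and the expected frequencies.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 matrix of observed frequencies: each individual is
# classified on two variables (rows = one variable, columns = the other).
observed = np.array([[22, 18, 10],
                     [13, 17, 20]])

chi2, p, dof, expected = chi2_contingency(observed)

print("chi-square =", round(chi2, 2))
print("df =", dof)                   # (R - 1)(C - 1) = (2 - 1)(3 - 1) = 2
print("p =", round(p, 4))
print("expected frequencies:\n", np.round(expected, 2))
```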

n in an ANOVA

The number of scores in each treatment.

G

The sum of all the scores in the research study

The Test Statistic for ANOVA

The test statistic for ANOVA is very similar to the t statistics used in earlier chapters. For the t statistic, we first computed the standard error, which measures how much difference between two sample means is reasonable to expect if there is no treatment effect (that is, if H0 is true). For ANOVA, however, we want to compare differences among two or more sample means. With more than two samples, the concept of "difference between sample means" becomes difficult to define or measure. The solution to this problem is to use variance to define and measure the size of the differences among the sample means.

N in an ANOVA

The total number of scores in the entire study.

What do the chi-square test for independence, the Pearson correlation, and simple linear regressions all have in common?

They all evaluate the relationship between two variables.

Locating the Critical Region for a Chi-Square Test

To determine whether a particular chi-square value is significantly large, you must consult a chi-square distribution table. The first column lists df values for the chi-square test, and the top row of the table lists proportions (alpha levels) in the extreme right-hand tail of the distribution. The numbers in the body of the table are the critical values of chi-square.
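If software is available instead of a printed table, the same critical value can be obtained directly; this minimal sketch assumes df = 3 and alpha = .05.

```python
from scipy.stats import chi2

df, alpha = 3, 0.05
critical_value = chi2.ppf(1 - alpha, df)   # value cutting off the extreme 5% tail
print(round(critical_value, 2))            # 7.81 for df = 3, alpha = .05
```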

Assumptions for the Chi-Square Tests

To use a chi-square test for goodness of fit or a test of independence, several conditions must be satisfied. For any statistical test, violation of assumptions and restrictions casts doubt on the results. Independence of Observations: This is not to be confused with the concept of independence between variables, as seen in the chi-square test for independence. One consequence of independent observations is that each observed frequency is generated by a different individual. Size of expected frequencies: A chi-square test should not be performed when the expected frequency of any cell is less than 5. The chi-square statistic can be distorted when fe is very small.

T

Treatment total

Post Hoc Tests (Posttests)

Tukey's Honestly Significant Difference (HSD) Test and the Scheffé Test

Under what conditions might a post hoc test be performed following ANOVA?

When there are three treatments and the null hypothesis was rejected.

Correlation and Restricted Range

Whenever a correlation is computed from scores that do not represent the full range of possible values, you should be cautious in interpreting the correlation. To be safe, you should not generalize any correlation beyond the range of data represented in the sample. For a correlation to provide an accurate description for the general population, there should be a wide range of X and Y values in the data.

Which of the following situations is an example of a dichotomous variable and would therefore suggest the possible use of a point-biserial correlation?

Whether a computer is running the latest version of an operating system or an earlier version

Scores for a particular variable are typically represented by these letters:

X and Y

α

alpha value

Degrees of Freedom

df

An analysis of variance is used to evaluate the mean differences for a research study comparing 4 treatment conditions and 7 scores in each sample. How many total degrees of freedom are there?

27

ANOVA is to be used in a research study using two therapy groups. For each group, scores will be taken before the therapy, right after the therapy, and one year after the therapy. How many different sample means will there be?

6

For a posttest following ANOVA, there are four different treatment groups. How many pairwise comparisons must be made to gain a complete understanding of which treatment effects differ significantly from others?

6

For which test is df not related to the sample size?

A chi-square test

Correlations: Measuring and Describing Relationships

A correlation is a statistical method used to measure and describe the relationship between two variables. A relationship exists when changes in one variable tend to be accompanied by consistent and predictable changes in the other variable. A correlation typically evaluates three aspects of the relationship:
1. The direction: in a positive relationship, the two variables tend to change in the same direction, which is indicated by a positive correlation coefficient; in a negative relationship, the two variables tend to go in opposite directions, and this inverse relationship is indicated by a negative correlation coefficient.
2. The form: the most common use of correlation is to measure straight-line relationships, but other forms of relationships do exist and there are special correlations used to measure them.
3. The strength: the correlation measures the consistency of the relationship.

Correlation and the Strength of the Relationship

A correlation measures the degree of relationship between two variables on a scale from 0 to 1.00 (ignoring the sign, which indicates direction). Although this number provides a measure of the degree of relationship, many researchers prefer to square the correlation and use the resulting value to measure the strength of the relationship. The value r^2 is called the coefficient of determination because it measures the proportion of variability in one variable that can be determined from the relationship with the other variable. When there is a less-than-perfect correlation between two variables, extreme scores (high or low) for one variable tend to be paired with less extreme scores (more toward the mean) on the second variable. This fact is called regression toward the mean.

Analysis of Variance (ANOVA)

A hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations).

Independent-Measures Research Design or a Between-Subjects Design

A research design that uses a separate group of participants for each treatment condition (or for each population).

Reporting the Results of an Independent-Measures t Test

A research report typically presents the descriptive statistics followed by the results of the hypothesis test and measures of effect size (inferential statistics). You should note that standard deviation is not a step in the computations for the independent-measures t test, yet it is useful when providing descriptive statistics for each treatment group. It is easily computed when doing the t test because you need SS and df for both groups to determine the pooled variance. Note that the format for reporting this t statistic is the same as for other t tests and that the measure of effect size is reported immediately after the results of the hypothesis test. Example: The students who were tested in a dimly lit room reported higher performance scores (M= 12, SD= 2.93) than the students who were tested in the well-lit room (M= 8, SD= 3.07). The mean difference was significant, t(14)= 2.67, p< 0.05, d= 1.33.

Measuring Effect Size for ANOVA

A significant mean difference simply indicates that the difference observed in the sample data is very unlikely to have occurred just by chance. Thus, the term significant does not necessarily mean large, it simply means larger than expected by chance. To provide an indication of how large the effect actually is, it is recommended that researchers report a measure of effect size in addition to the measure of significance. For ANOVA, the simplest and most direct way to measure effect size is to compute the percentage of variance accounted for by the treatment conditions. The calculation and the concept of the percentage of variance are extremely straightforward. Specifically, we determine how much of the total SS is accounted for by SSbetween treatments: η^2 = SSbetween treatments / SStotal.
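A minimal sketch of that calculation, using hypothetical SS values in place of a real ANOVA summary table:

```python
# Hypothetical SS values taken from an ANOVA summary table.
ss_between_treatments = 70.0
ss_total = 190.0

# Percentage of variance accounted for by the treatments (eta squared).
eta_squared = ss_between_treatments / ss_total
print(round(eta_squared, 4))   # reported as eta^2 alongside the F-ratio
```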

Which of the following is an application of correlations?

All of the above: Prediction, Validity, and Reliability

Which of the following would be a reason for transforming scores into categories and using a nonparametric test?

All of the above: The original scores may violate some of the basic assumptions that underlie certain statistical procedures such as a normally distributed population; The original scores may have unusually high variance; The experiment produces an undetermined, or infinite, score.

Parametric and Nonparametric Tests

All the statistical tests we have examined thus far are designed to test hypotheses about specific population parameters. Because these tests all concern parameters and require assumptions about parameters, they are called parametric tests. Another general characteristic of parametric tests is that they require a numerical score for each individual in the sample. Often, researchers are confronted with experimental situations that do not conform to the requirements of parametric tests. In these situations, it may not be appropriate to use a parametric test. When the assumptions of a test are violated, the test may lead to an erroneous interpretation. There are several hypothesis-testing techniques that provide alternatives to parametric tests. These alternatives are called nonparametric tests. Occasionally, you have a choice between using a parametric and a nonparametric test. In most situations, the parametric test is preferred because it is more likely to detect a real difference or a real relationship. However, there are situations for which transforming scores into categories might be a better choice. It may be simpler to obtain category measurements. The original scores may violate some of the basic assumptions that underlie certain statistical procedures. The original scores may have unusually high variance. Occasionally, an experiment produces an undetermined, or infinite, score when, for example, a participant fails to solve a problem. Although there is no absolute number that can be assigned, you can say that the participant is in the highest category, and then classify the other scores according to their numerical values.

An Overview of Analysis of Variance

Analysis of variance (ANOVA) is a hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations). As with all inferential procedures, ANOVA uses sample data as the basis for drawing general conclusions about populations. The major advantage of ANOVA is that it can be used to compare two or more treatments. Specifically, we must decide between two interpretations:
1. There really are no differences between the populations (or treatments). The observed differences between the sample means are caused by random, unsystematic factors (sampling error).
2. The populations (or treatments) really do have different means, and these population mean differences are responsible for causing systematic differences between the sample means.

ANOVA Notation and Formulas

Because ANOVA typically is used to examine data from more than two treatment conditions (and more than two samples), we need a notational system to keep track of all the individual scores and totals. The letter k is used to identify the number of treatment conditions—that is, the number of levels of the factor. For an independent-measures study, k also specifies the number of separate samples. The number of scores in each treatment is identified by a lowercase letter n. The total number of scores in the entire study is specified by a capital letter N. The sum of the scores (ΣX) for each treatment condition is identified by the capital letter T (for treatment total). The sum of all the scores in the research study (the grand total) is identified by G.

The Scheffé Test

Because it uses an extremely cautious method for reducing the risk of a Type I error, the Scheffé test has the distinction of being one of the safest of all possible post hoc tests (smallest risk of a Type I error). The Scheffé test uses an F-ratio to evaluate the significance of the difference between any two treatment conditions. The numerator of the F-ratio is an MS between treatments that is calculated using only the two treatments you want to compare. The denominator is the same MSwithin that was used for the overall ANOVA. The "safety factor" for the Scheffé test comes from the following two considerations: 1. Although you are comparing only two treatments, the Scheffé test uses the value of k from the original experiment to compute df between treatments. Thus, df for the numerator of the F-ratio is k - 1. 2. The critical value for the Scheffé F-ratio is the same as was used to evaluate the F-ratio from the overall ANOVA. Thus, Scheffé requires that every posttest satisfy the same criterion that was used for the complete ANOVA.

Which of the following will increase the likelihood of rejecting the null hypothesis using ANOVA?

Both A and B: a decrease in SSwithin and an increase in the sample sizes

What can be used to conduct the hypothesis test for the Pearson correlation?

Both A and B: a t statistic and an F-ratio

Using and Interpreting the Pearson Correlation

Correlations are used in a number of situations.
Prediction: If two variables are known to be related in some systematic way, it is possible to use one of the variables to make accurate predictions about the other.
Validity: One common technique for demonstrating validity is to use a correlation. For example, if a new test actually measures intelligence, then the scores on the test should be related to other measures of intelligence.
Reliability: In addition to evaluating the validity of a measurement procedure, correlations are used to determine reliability. A measurement procedure is considered reliable to the extent that it produces stable, consistent measurements.
Theory verification: Many psychological theories make specific predictions about the relationship between two variables. Such predictions can be tested by determining the correlation between the two variables.
The statistical significance of the Pearson correlation can be found by referring to a table of critical values. When you encounter correlations, there are four additional considerations that you should bear in mind:
1. Correlation simply describes a relationship between two variables. It does not explain why the two variables are related. Specifically, a correlation should not and cannot be interpreted as proof of a cause-and-effect relationship between the two variables.
2. The value of a correlation can be affected greatly by the range of scores represented in the data.
3. One or two extreme data points, often called outliers, can have a dramatic effect on the value of a correlation.
4. When judging how "good" a relationship is, it is tempting to focus on the numerical value of the correlation. However, a correlation should not be interpreted as a proportion. To describe how accurately one variable predicts the other, you must square the correlation. Thus, a correlation of r = .5 means that one variable partially predicts the other, but the predictable portion is only r^2 = .5^2 = 0.25 (or 25%) of the total variability.

Type I Errors and Multiple-Hypothesis Tests

Each time you do a hypothesis test, you select an alpha level that determines the risk of a Type I error. Often a single experiment requires several hypothesis tests to evaluate all the mean differences. However, each test has a risk of a Type I error, and the more tests you do, the greater the risk. For this reason, researchers often make a distinction between the testwise alpha level and the experimentwise alpha level. The testwise alpha level is the risk of a Type I error, or alpha level, for an individual hypothesis test. When an experiment involves several different hypothesis tests, the experimentwise alpha level is the total probability of a Type I error that is accumulated from all of the individual tests in the experiment.

Reporting the Results for Chi-Square

Example: The participants showed significant preferences among the four orientations for hanging the painting, χ^2(3, n= 50)= 8.08, p< 0.05. Note that the form of the report is similar to that of other statistical tests. Degrees of freedom are indicated in parentheses following the chi-square symbol. Also contained in the parentheses is the sample size (n). This additional information is important because the degrees of freedom value is based on the number of categories (C), not the sample size. Next, the calculated value of chi-square is presented, followed by the probability that a Type I error has been committed. Because we obtained an extreme, very unlikely value for the chi-square statistic, the probability is reported as less than the alpha level. Additionally, the report may provide the observed frequencies (fo) for each category. This information may be presented in a simple sentence or in a table.

Which of the following is not a correct interpretation of the F-ratio in ANOVA testing?

F= variance between treatments/ total standard error

Reporting the Results of a t Test

First, recall that a scientific report typically uses the term significant to indicate that the null hypothesis has been rejected and the term not significant to indicate failure to reject the null hypothesis. Additionally, there is a prescribed format for reporting the calculated value of the test statistic, degrees of freedom, and alpha level for a t test. The first statement reports the descriptive statistics, the mean and the standard deviation, as previously described. The next statement provides the results of the inferential statistical analysis. Note that the degrees of freedom are reported in parentheses immediately after the symbol t. The value for the obtained t statistic follows, and next is the probability of committing a Type I error. Finally, the effect size is reported. Example: The infants spent an average of M= 13 out of 20 seconds looking at the attractive face, with SD= 3.00. Statistical analysis indicates that the time spent looking at the attractive face was significantly greater than would be expected if there were no preference, t(8)= 3.00, p< 0.05, r^2= 0.5294.

The F Distribution Table

For ANOVA, we expect F near 1.00 if H0 is true. An F-ratio that is much larger than 1.00 is an indication that H0 is not true. In the F distribution, we need to separate those values that are reasonably near 1.00 from the values that are significantly greater than 1.00. These critical values are presented in an F distribution table. To use the table, you must know the df values for the F-ratio (numerator and denominator), and you must know the alpha level for the hypothesis test. It is customary for an F table to have the df values for the numerator of the F-ratio printed across the top of the table. The df values for the denominator of F are printed in a column on the left-hand side.
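The same critical values can be obtained from software instead of the printed table; this sketch assumes df = 2 for the numerator, df = 15 for the denominator, and alpha = .05 (matching the F(2, 15) reporting example elsewhere in this set).

```python
from scipy.stats import f

df_between, df_within, alpha = 2, 15, 0.05
critical_f = f.ppf(1 - alpha, df_between, df_within)
print(round(critical_f, 2))   # 3.68: F-ratios beyond this fall in the critical region
```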

The F-Ratio: The Test Statistic for ANOVA

For the independent-measures ANOVA, the F-ratio has the following structure: F-ratio = variance between treatments/ variance within treatments The value obtained for the F-ratio helps determine whether any treatment effects exist. When there are no systematic treatment effects, the differences between treatments (numerator) are entirely caused by random, unsystematic factors. When the treatment does have an effect, then the combination of systematic and random differences in the numerator should be larger than the random differences alone in the denominator. For ANOVA, the denominator of the F-ratio is called the error term. The error term provides a measure of the variance caused by random, unsystematic differences. When the treatment effect is zero (H0 is true), the error term measures the same sources of variance as the numerator of the F-ratio, so the value of the F-ratio is expected to be nearly equal to 1.00.
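For illustration, a minimal sketch with hypothetical scores from three treatment conditions; scipy.stats.f_oneway computes the same F-ratio of between-treatments variance to within-treatments variance.

```python
from scipy.stats import f_oneway

# Hypothetical scores from three independent treatment conditions.
treatment1 = [4, 3, 6, 3, 4]
treatment2 = [2, 2, 3, 1, 2]
treatment3 = [1, 3, 2, 1, 3]

f_ratio, p_value = f_oneway(treatment1, treatment2, treatment3)
print("F =", round(f_ratio, 2), "p =", round(p_value, 4))
```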

The Distribution of F-Ratios

In analysis of variance, the F-ratio is constructed so that the numerator and denominator of the ratio are measuring exactly the same variance when the null hypothesis is true. In this situation, we expect the value of F to be around 1.00. If the null hypothesis is false, the F-ratio should be much greater than 1.00. The problem is to define precisely which values are "around 1.00" and which are "much greater." To answer this question, we need to look at all the possible F values that can be obtained when the null hypothesis is true—that is, the distribution of F-ratios. Before we examine this distribution in detail, you should note two obvious characteristics: 1. Because F-ratios are computed from two variances (the numerator and denominator of the ratio), F values always are positive numbers. Variance is always positive. 2. When H0 is true, the numerator and denominator of the F-ratio are measuring the same variance. In this case, the two sample variances should be about the same size, so the ratio should be near 1. In other words, the distribution of F-ratios should pile up around 1.00. With these two factors in mind, we can sketch the distribution of F-ratios. The distribution is cut off at zero (all positive values), piles up around 1.00, and then tapers off to the right. The exact shape of the F distribution depends on the degrees of freedom for the two variances in the F-ratio.

Terminology in Analysis of Variance

In analysis of variance, the variable (independent or quasi-independent) that designates the groups being compared is called a factor. The individual conditions or values that make up a factor are called the levels of the factor. A study that combines two factors is called a two-factor design or a factorial design.

Analysis of Degrees of Freedom (df)

In computing the degrees of freedom, there are two important considerations to keep in mind:
1. Each df value is associated with a specific SS value.
2. Normally, the value of df is obtained by counting the number of items that were used to calculate SS and then subtracting 1. For example, if you compute SS for a set of n scores, then df = n - 1.
The df values for the three SS components are found as follows:
1. To find the df associated with SStotal, you must first recall that this SS value measures variability for the entire set of N scores. Therefore, the df value is dftotal = N - 1.
2. To find the df associated with SSwithin, we must look at how this SS value is computed. Remember, we first find SS inside each of the treatments and then add these values together. Each of the treatment SS values measures variability for the n scores in the treatment, so each SS has df = n - 1. When all these individual treatment values are added together, we obtain dfwithin = Σ(n - 1) = Σdf in each treatment. Notice that the formula for dfwithin simply adds up the number of scores in each treatment (the n values) and subtracts 1 for each treatment. If these two stages are done separately, you obtain dfwithin = N - k.
3. The df associated with SSbetween can be found by considering how the SS value is obtained. This SS formula measures the variability for the set of treatments (totals or means). To find dfbetween, simply count the number of treatments and subtract 1. Because the number of treatments is specified by the letter k, the formula for df is dfbetween = k - 1.
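Using the example from this set (4 treatment conditions with 7 scores each), the df values can be checked with a few lines of arithmetic:

```python
k = 4                     # number of treatment conditions
n = 7                     # scores in each treatment
N = k * n                 # total number of scores in the study

df_total = N - 1          # 27
df_within = N - k         # 24
df_between = k - 1        # 3

# The two components must add back up to the total.
assert df_total == df_within + df_between
print(df_total, df_within, df_between)
```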

Statistical Hypotheses for ANOVA

In general, H0 states that there is no treatment effect. In an ANOVA with three groups H0 could appear as: H0: μ1 = μ2 = μ3 The alternative hypothesis states that the population means are not all the same: H1: There is at least one mean difference.

Alternatives to the Pearson Correlation

When the data consist of one dichotomous variable and one numerical variable (the point-biserial situation), the independent-measures t test can be used to evaluate the mean difference between the two groups. If the effect size for the mean difference is measured by computing r^2 (the percentage of variance explained), the value of r^2 will be equal to the value obtained by squaring the point-biserial correlation.

Within-Treatments Variance

Inside each treatment condition, we have a set of individuals who receive the same treatment. The researcher does not do anything that would cause these individuals to have different scores, yet they usually do have different scores. The differences represent random and unsystematic differences that occur when there are no treatment effects. Thus, the within-treatments variance provides a measure of how big the differences are when H0 is true.

Critical Region

Is composed of the extreme sample values that are very unlikely (as defined by the alpha level) to be obtained if the null hypothesis is true.

Repeated-Measures Design or a Within-Subject Design

Is one in which the dependent variable is measured two or more times for each individual in a single sample. The same group of subjects is used in all of the treatment conditions. The analysis is based on the difference scores between the treatment conditions.

T Distribution

Is the complete set of t values computed for every possible random sample for a specific sample size (n) or a specific degrees of freedom (df). It approximates the shape of a normal distribution.

Median for a Distribution

Is the midpoint of the list when scores are listed from smallest to largest, the point on the measurement scale below which 50% of the scores in the distribution are located.

r^2

Is the percentage of variance accounted for by the treatment.

Mode for a Distribution

Is the score or category that has the greatest frequency.

σ

Is the standard deviation for a population.

s

Is the standard deviation for a sample.

Mean for a Distribution

Is the sum of the scores divided by the number of scores.

s^2

Is the variance for a sample.

The Greek letter sigma, ∑

Is used to stand for summation.

T Statistic

Is used to test hypotheses about an unknown population mean, μ, when the value of the standard deviation, σ, is unknown. The formula for this statistic has the same structure as the z-score formula, except that it uses the estimated standard error in the denominator.

Reporting the Results of Analysis of Variance

It begins with a presentation of the treatment means and standard deviations in the narrative of the article, a table, or a graph. These descriptive statistics are not needed in the calculations of the actual ANOVA, but you can easily determine the treatment means from n and T (M= T/n) and the standard deviations from the SS values for each treatment. Next, report the results of the ANOVA. Example: The means and standard deviations are presented in the source table. The analysis of variance indicates that there are significant differences among the three strategies for studying, F(2, 15)= 7.16, p< 0.05, η^2= 0.4888.

What is the main advantage that ANOVA testing has compared with t testing?

It can be used to compare two or more treatments.

Reporting the Results of a Repeated-Measures t Test

It consists of a concise statement that incorporates the t value, degrees of freedom, and alpha level. One typically includes values of means and standard deviations. We also include a statement of the measured effect size, obtained by computing the percentage of variance explained, r^2. Example: Changing from a neutral word to a swear word reduced the perceived level of pain by an average of M= 2.00 points with SD= 2.00. The treatment effect was statistically significant, t(8)= -3.00, p< 0.05, r^2= 0.529.

The Phi-Coefficient and Cramér's V

It is possible to compute the correlation phi (ϕ) in addition to the chi-square hypothesis test for the same set of data. Because phi is a correlation, it measures the strength of the relationship and thus provides a measure of effect size. When the chi-square test involves a matrix larger than 2 × 2, a modification of the phi-coefficient, known as Cramér's V, can be used to measure effect size. For Cramér's V, the value of df* is the smaller of either (R - 1) or (C - 1).
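A minimal sketch with a hypothetical 3 x 4 frequency matrix; the chi-square value comes from scipy.stats.chi2_contingency, and Cramér's V is then computed from the standard formula V = sqrt(chi-square / (n * df*)).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3 x 4 matrix of observed frequencies.
observed = np.array([[10, 12,  8, 10],
                     [14,  9, 11,  6],
                     [ 6,  9, 11, 14]])

chi2, p, dof, expected = chi2_contingency(observed)
n = observed.sum()
df_star = min(observed.shape[0] - 1, observed.shape[1] - 1)  # smaller of R-1, C-1

cramers_v = np.sqrt(chi2 / (n * df_star))
print(round(cramers_v, 3))
```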

ANOVA Summary Tables

It is useful to organize the results of the analysis in one table called an ANOVA summary table. The table shows the source of variability (between treatments, within treatments, and total variability), SS, df, MS, and F. Although these tables are no longer used in published reports, they are a common part of computer printouts, and they do provide a concise method for presenting the results of an analysis.

Partial Correlations

Occasionally a researcher may suspect that the relationship between two variables is being distorted by the influence of a third variable. A partial correlation measures the relationship between two variables while controlling the influence of a third variable by holding it constant. In a situation with three variables, X, Y, and Z, it is possible to compute three individual Pearson correlations: 1. rXY measuring the correlation between X and Y; 2. rXZ measuring the correlation between X and Z; 3. rYZ measuring the correlation between Y and Z. These three individual correlations can then be used to compute a partial correlation. For example, the partial correlation between X and Y, holding Z constant, is determined by the formula.
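The card above ends before giving the formula. As a sketch, the standard formula for the partial correlation of X and Y with Z held constant combines the three pairwise correlations; the r values below are hypothetical.

```python
import numpy as np

# Hypothetical pairwise Pearson correlations among X, Y, and Z.
r_xy, r_xz, r_yz = 0.60, 0.40, 0.50

# Standard formula for the partial correlation between X and Y, holding Z constant.
r_xy_given_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
print(round(r_xy_given_z, 3))
```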

Correlation and Causation

One of the most common errors in interpreting correlations is to assume that a correlation necessarily implies a cause-and-effect relationship between the two variables. Although there may be a causal relationship, the existence of a correlation does not prove it. To establish a cause-and-effect relationship, it is necessary to conduct a true experiment in which one variable is manipulated and other variables are rigorously controlled.

Post Hoc Tests

Post hoc tests (or posttests) are additional hypothesis tests that are done after an ANOVA to determine exactly which mean differences are significant and which are not. In general, a post hoc test enables you to go back through the data and compare the individual treatments two at a time. In statistical terms, this is called making pairwise comparisons. The process of conducting pairwise comparisons involves performing a series of separate hypothesis tests, and each of these tests includes the risk of a Type I error. As you do more and more separate tests, the risk of a Type I error accumulates and is called the experimentwise alpha level.

Sum of Squares is represented as:

SS

Reporting the Results of the Statistical Test

State whether there was a significant effect in the experiment, then report the z-score obtained and the probability compared with the alpha level. Example: Wearing a red shirt had a significant effect on the size of the tips left by male customers, z= 2.25, p<0.05.

z-score

Specifies the precise location of each X value within a distribution. The sign signifies whether the score is above the mean (positive) or below the mean (negative). The numerical value specifies the distance from the mean by counting the number of standard deviations between X and μ.

Null Hypothesis

States that in the general population there is no change, no difference, or no relationship. In the context of an experiment, it predicts that the independent variable (treatment) has no effect on the dependent variable (scores) for the population.

Alternative Hypothesis

States that there is a change, a difference, or a relationship for the general population. In the context of an experiment, it predicts that the independent variable (treatment) does have an effect on the dependent variable.

Steps for Hypothesis Testing

Step 1: State the hypotheses. Step 2: Locate critical regions. Step 3: Collect the data and compute sample statistics. Step 4: Make a decision.

ΣX

Sum of the scores

Testing the Significance of the Spearman Correlation

Testing a hypothesis for the Spearman correlation is similar to the procedure used for the Pearson r. The basic question is whether a correlation exists in the population. The null hypothesis states that there is no correlation (no monotonic relationship) between the variables for the population, or in symbols: H0: ρS = 0. The alternative hypothesis predicts that a nonzero correlation exists in the population, which can be stated in symbols as H1: ρS ≠ 0. To determine whether the Spearman correlation is statistically significant, use a table of critical values.

Effect Size for the Chi-Square Tests

Tests of significance are influenced not only by the size or strength of the treatment effects but also by the size of the samples. As a result, even a small effect can be statistically significant if it is observed in a very large sample. Because a significant effect does not necessarily mean a large effect, it is generally recommended that the outcome of a hypothesis test be accompanied by a measure of the effect size.

If the variance between treatments increases and the variance within treatments decreases, what will happen to the F-ratios and the likelihood of rejecting the null hypothesis in an ANOVA test?

The F-ratio and the likelihood of rejecting the null hypothesis will increase.

The Pearson Correlation

The Pearson correlation measures the direction and degree (strength) of the linear relationship between two variables. To compute the Pearson correlation, you first measure the variability of X and Y scores separately by computing SS for the scores of each variable (SSX and SSY). Then, the covariability (tendency for X and Y to vary together) is measured by the sum of products (SP). Thus, the Pearson correlation is comparing the amount of covariability (variation from the relationship between X and Y) to the amount X and Y vary separately. The magnitude of the Pearson correlation ranges from 0 (indicating no linear relationship between X and Y) to 1.00 (indicating a perfect straight-line relationship between X and Y). The correlation can be either positive or negative depending on the direction of the relationship.
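A minimal sketch of the computation with hypothetical paired scores, following the SS and SP definitions above, with scipy.stats.pearsonr as a cross-check.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired X and Y scores.
x = np.array([1, 3, 4, 6, 7, 9], dtype=float)
y = np.array([2, 3, 5, 5, 8, 9], dtype=float)

ss_x = np.sum((x - x.mean()) ** 2)              # variability of the X scores
ss_y = np.sum((y - y.mean()) ** 2)              # variability of the Y scores
sp = np.sum((x - x.mean()) * (y - y.mean()))    # covariability of X and Y

r = sp / np.sqrt(ss_x * ss_y)                   # Pearson correlation
print(round(r, 3), round(pearsonr(x, y)[0], 3)) # the two values should agree
```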

What is the difference between the Pearson correlation and the Spearman correlation?

The Spearman correlation is the same as the Pearson correlation, but it is used on data from an ordinal scale.

Spearman Correlation

The Spearman correlation is used in two general situations:
1. It measures the relationship between two ordinal variables; that is, X and Y both consist of ranks.
2. It measures the consistency of direction of the relationship between two variables. The Pearson correlation measures the degree of linear relationship between two variables. However, a researcher often expects the data to show a consistently one-directional relationship but not necessarily a linear relationship.
The calculation of the Spearman correlation requires:
1. Two variables are observed for each individual.
2. The observations for each variable are rank ordered. Note that the X values and the Y values are ranked separately.
3. After the variables have been ranked, the Spearman correlation is computed by either (a) using the Pearson formula with the ranked data or (b) using the special Spearman formula (assuming there are few, if any, tied ranks).
When you are converting scores into ranks for the Spearman correlation, you may encounter two (or more) identical scores. Whenever two scores have exactly the same value, their ranks should also be the same. This is accomplished by the following procedure:
1. List the scores in order from smallest to largest. Include tied values in the list.
2. Assign a rank (first, second, etc.) to each position in the ordered list.
3. When two (or more) scores are tied, compute the mean of their ranked positions, and assign this mean value as the final rank for each score.
Because calculations with ranks can be simplified, these simplifications can be incorporated into the final calculations for the Spearman correlation. Instead of using the Pearson formula after ranking the data, you can put the ranks directly into a simplified formula where D is the difference between the X rank and the Y rank for each individual.
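A minimal sketch with hypothetical scores; scipy.stats.spearmanr ranks each variable (averaging the ranked positions of tied scores) and then applies the Pearson formula to the ranks.

```python
from scipy.stats import spearmanr

# Hypothetical paired scores; the last two Y values are tied.
x = [12, 15, 9, 20, 31, 25]
y = [3, 5, 2, 8, 11, 11]

rho, p_value = spearmanr(x, y)
print("r_s =", round(rho, 3), "p =", round(p_value, 4))
```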

Hypothesis Tests with the Pearson Correlation

The basic question for this hypothesis test is whether a correlation exists in the population. The null hypothesis is "No. There is no correlation in the population." or "The population correlation is zero." The alternative hypothesis is "Yes. There is a real, nonzero correlation in the population." Because the population correlation is traditionally represented by ρ (the Greek letter rho), these hypotheses would be stated in symbols as H0: ρ = 0 and H1: ρ ≠ 0. When there is a specific prediction about the direction of the correlation, it is possible to do a directional, or one-tailed, test. For example, if a researcher is predicting a positive relationship, the hypotheses would be H0: ρ ≤ 0 and H1: ρ > 0. The hypothesis test evaluating the significance of a correlation can be conducted using either a t statistic or an F-ratio. The t statistic for a correlation has the same general structure as other t statistics: t = (sample statistic - population parameter) / standard error. In this case, the sample statistic is the sample correlation (r) and the corresponding parameter is the population correlation (ρ). The null hypothesis specifies that the population correlation is ρ = 0. The t statistic has degrees of freedom defined by df = n - 2.
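A minimal sketch of this test, assuming a hypothetical sample correlation and sample size; under H0: ρ = 0, the standard error of r is sqrt((1 - r^2) / (n - 2)).

```python
import numpy as np
from scipy.stats import t as t_dist

# Hypothetical sample correlation and sample size.
r, n = 0.65, 30
df = n - 2

# t = (r - 0) / standard error, with standard error = sqrt((1 - r^2) / (n - 2)).
t_stat = r / np.sqrt((1 - r**2) / df)
p_value = 2 * t_dist.sf(abs(t_stat), df)      # two-tailed probability

print("t =", round(t_stat, 2), "df =", df, "p =", round(p_value, 4))
```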

Between-Treatments Variance

The between-treatments variance simply measures how much difference exists between the treatment conditions. There are two possible explanations for these between-treatment differences: 1. The differences between treatments are not caused by any treatment effect but are simply the naturally occurring, random and unsystematic differences that exist between one sample and another. That is, the differences are the result of sampling error. 2. The differences between treatments have been caused by the treatment effects. Thus, when we compute the between-treatments variance, we are measuring differences that could be caused by a systematic treatment effect or could simply be random and unsystematic mean differences caused by sampling error.

The Chi-Square Test for Goodness-of-Fit

The chi-square test for goodness of fit uses sample data to test hypotheses about the shape or proportions of a population distribution. The test determines how well the obtained sample proportions fit the population proportions specified by the null hypothesis. The null hypothesis specifies the proportion of the population that should be in each category:
1. The null hypothesis often states that there is no preference among the different categories. In this case, H0 states that the population is divided equally among the categories.
2. The null hypothesis can state that the proportions for one population are not different from the proportions that are known to exist for another population.
The data for a chi-square test are remarkably simple. You just select a sample of n individuals and count how many are in each category. The resulting values are called observed frequencies. The symbol for observed frequency is fo.
The general goal of the chi-square test for goodness of fit is to compare the data (the observed frequencies) with the null hypothesis. The problem is to determine how well the data fit the distribution specified in H0. The first step in the chi-square test is to construct a hypothetical sample that represents how the sample distribution would look if it were in perfect agreement with the proportions stated in the null hypothesis. The expected frequency for each category is the frequency value that is predicted from the proportions in the null hypothesis and the sample size (n). The expected frequencies define an ideal, hypothetical sample distribution that would be obtained if the sample proportions were in perfect agreement with the proportions specified in the null hypothesis.
The chi-square statistic simply measures how well the data (fo) fit the hypothesis (fe). The symbol for the chi-square statistic is χ^2. As the formula indicates, the value of chi-square is computed by the following steps:
1. Find the difference between fo (the data) and fe (the hypothesis) for each category.
2. Square the difference. This ensures that all values are positive.
3. Divide the squared difference by fe.
4. Finally, sum the values from all the categories.
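A minimal sketch of the goodness-of-fit computation with hypothetical observed frequencies for four categories (n = 50) and a no-preference null hypothesis (expected frequency 12.5 per category).

```python
from scipy.stats import chisquare

# Hypothetical observed frequencies for four categories (n = 50),
# tested against equal expected frequencies of 12.5 per category.
f_o = [18, 17, 7, 8]
f_e = [12.5, 12.5, 12.5, 12.5]

chi2_stat, p_value = chisquare(f_o, f_exp=f_e)
print("chi-square =", round(chi2_stat, 2), "p =", round(p_value, 4))
```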

The Logic of Analysis of Variance

The formulas and calculations required in ANOVA are somewhat complicated, but the logic that underlies the whole procedure is fairly straightforward. The analysis process divides the total variability into two basic components: between-treatments variance and within-treatments variance.

Assumptions for the Independent-Measures ANOVA

The independent-measures ANOVA requires the same three assumptions that were necessary for the independent-measures t hypothesis test: 1. The observations within each sample must be independent. 2. The populations from which the samples are selected must be normal. 3. The populations from which the samples are selected must have equal variances (homogeneity of variance). Ordinarily, researchers are not overly concerned with the assumption of normality, especially when large samples are used, unless there are strong reasons to suspect the assumption has not been satisfied. The assumption of homogeneity of variance is an important one. If a researcher suspects it has been violated, it can be tested by Hartley's F-max test for homogeneity of variance. If you suspect that one of the assumptions for the independent-measures ANOVA has been violated, you can still proceed by transforming the original scores to ranks and then using an alternative statistical analysis known as the Kruskal-Wallis test, which is designed specifically for ordinal data.

Which of the following is not measured and described by a correlation?

The mean difference of a relationship

Dependent Variable

The one that is observed to assess the effect of the treatment.

The Phi-Coefficient

The phi-coefficient is used when both variables are dichotomous. The calculation proceeds as follows: Convert each of the dichotomous variables to numerical values by assigning a 0 to one category and a 1 to the other category for each of the variables. Use the regular Pearson formula with the converted scores.
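A minimal sketch with hypothetical 0/1 codes for two dichotomous variables; the phi-coefficient is simply the Pearson formula applied to these codes.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: two dichotomous variables coded 0/1 for each individual.
x = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
y = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1, 0])

phi = pearsonr(x, y)[0]     # the regular Pearson formula applied to the 0/1 codes
print(round(phi, 3))
```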

The Point-Biserial Correlation

The point-biserial correlation is used to measure the relationship between two variables in situations in which one variable consists of regular, numerical scores, but the second variable has only two values. A variable with only two values is called a dichotomous variable or a binomial variable. The calculation of the point-biserial correlation proceeds as follows: Assign numerical values to the two categories of the dichotomous variable(s). Traditionally, one category is assigned a value of 0 and the other is assigned a value of 1. Use the regular Pearson correlation formula to calculate the correlation. The point-biserial correlation is closely related to the independent-measures t test. When the data consists of one dichotomous variable and one numerical variable, the dichotomous variable can also be used to separate the individuals into two groups. Then, it is possible to compute a sample mean for the numerical scores in each group.
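A minimal sketch with hypothetical data (group membership coded 0/1 and a numerical score); scipy.stats.pointbiserialr applies the Pearson formula to this situation. Squaring r_pb gives the same r^2 effect size that the independent-measures t test would produce for these two groups.

```python
from scipy.stats import pointbiserialr

# Hypothetical data: a dichotomous variable coded 0/1 and a numerical score.
group = [0, 0, 0, 0, 1, 1, 1, 1]
score = [4, 6, 5, 3, 8, 9, 7, 10]

r_pb, p_value = pointbiserialr(group, score)
print("r_pb =", round(r_pb, 3), "r_pb^2 =", round(r_pb**2, 3))
```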

Which of the following most accurately describes the F-ratio in ANOVA testing?

The ratio of variances

What does the chi-square test for independence evaluate?

The relationship between two variables in the population

Reporting Correlations

The report should include the sample size, the calculated value for the correlation, whether it is a statistically significant relationship, the probability level, and the type of test used (one- or two-tailed). Example: A correlation for the data revealed a significant relationship between amount of education and annual income, r= +0.65, n= 30, p< 0.01, two tails.

One-Tailed Test

The statistical hypotheses specify either an increase or a decrease in the population mean. That is, they make a statement about the direction of the effect. It allows you to reject the null hypothesis when the difference between the sample and the population is relatively small, provided the difference is in the specified direction.

Independent Variable

The variable that is manipulated by the researcher.

In an analysis of variance, the primary effect of large mean differences within each sample is to increase the value for ______.

The variance within treatments

The expression ∑X means:

To add all the scores for the variable X.

The Chi-Square Distribution and Degrees of Freedom

To decide whether a particular chi-square value is "large" or "small," we must refer to a chi-square distribution. This distribution is the set of chi-square values for all the possible random samples when H0 is true. The chi-square distribution is a theoretical distribution with well-defined characteristics. The formula for chi-square involves adding squared values, so you can never obtain a negative value. Thus, all chi-square values are zero or larger. When H0 is true, you expect the data (fo values) to be close to the hypothesis (fe values). Thus, we expect chi-square values to be small when H0 is true. These two factors suggest that the typical chi-square distribution will be positively skewed. For the goodness-of-fit test, the degrees of freedom are determined by, df = C - 1, where C is the number of categories.

Tukey's Honestly Significant Difference (HSD) Test

Tukey's test allows you to compute a single value that determines the minimum difference between treatment means that is necessary for significance. This value, called the honestly significant difference, or HSD, is then used to compare any two treatment conditions. If the mean difference exceeds Tukey's HSD, you conclude that there is a significant difference between the treatments. Otherwise, you cannot conclude that the treatments are significantly different.
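A minimal sketch of the HSD computation, HSD = q * sqrt(MSwithin / n); the q value is an assumed lookup from a studentized range table (roughly what would be found for k = 3 treatments, dfwithin = 15, alpha = .05), and MSwithin and n are hypothetical.

```python
import numpy as np

# Assumed/hypothetical values: q from a studentized range table,
# MS_within from the ANOVA, and n = number of scores per treatment.
q = 3.67
ms_within = 2.00
n = 6

hsd = q * np.sqrt(ms_within / n)   # minimum mean difference needed for significance
print(round(hsd, 2))               # any pair of treatment means differing by more is significant
```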

k

Used to identify the number of treatment conditions-that is, the number of levels of the factor.

The Relationship between ANOVA and t Tests

When you are evaluating the mean difference from an independent-measures study comparing only two treatments (two separate samples), you can use either an independent-measures t test or the ANOVA. The basic relationship between t statistics and F-ratios can be stated in an equation: F = t^2 There are several other points to consider in comparing the t statistic to the F-ratio. You will be testing the same hypotheses whether you choose a t test or an ANOVA. H0: μ1 = μ2; H1: μ1 ≠ μ2. The degrees of freedom for the t statistic and the df for the denominator of the F-ratio (dfwithin) are identical. The distribution of t and the distribution of F-ratios match perfectly if you take into consideration the relationship F = t^2.
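The F = t^2 relationship can be checked directly with hypothetical scores from two independent samples, using scipy's pooled-variance t test and one-way ANOVA.

```python
from scipy.stats import ttest_ind, f_oneway

# Hypothetical scores from two independent samples.
sample1 = [3, 5, 4, 6, 5, 4]
sample2 = [7, 6, 8, 7, 9, 8]

t_stat, _ = ttest_ind(sample1, sample2)   # pooled-variance (equal_var=True) t test
f_stat, _ = f_oneway(sample1, sample2)    # ANOVA with the same two samples

print(round(t_stat**2, 4), round(f_stat, 4))   # the two values should match: F = t^2
```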

Which statistic does the Median Test for Independent Samples rely on?

chi-square

Standard Deviation

σ or s

