psych stats exam 4 textbook notes

Bivariate distributions

- Joint distribution of two variables; scores are paired
- A bivariate distribution may show positive correlation, negative correlation, or zero correlation

Scatterplots

- The thinner the envelope around the points in a scatterplot, the larger the correlation
- The closer the points are to the regression line, the greater the correlation coefficient

Wilcoxon Signed-Rank T test

- The Wilcoxon test is appropriate for testing the difference between two paired samples
- The result of the test is a T value, which is interpreted using critical values from Table J
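
A minimal sketch of this test in Python, assuming SciPy is available; the paired scores below are hypothetical, and scipy.stats.wilcoxon computes the signed-rank statistic and a p-value in place of the Table J lookup:

```python
# Hedged sketch: paired-sample Wilcoxon signed-rank test with SciPy.
# The before/after scores are hypothetical example data.
from scipy import stats

before = [72, 65, 80, 58, 77, 69, 74, 61]
after = [75, 70, 78, 66, 82, 74, 79, 60]

# Returns the smaller signed-rank sum (analogous to T) and a p-value
stat, p = stats.wilcoxon(before, after)
print(stat, p)
```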

When you may use chi square

1. Use chi square when the subjects are identified by category rather than with a quantitative score. With chi square, the raw data are the number of subjects in each category
2. To use chi square, each observation must be independent of other observations. (Independence here means independence of the observations, not independence of the variables.) In practical terms, independence means that each subject appears only once in the table (no repeated measures on one subject) and that the response of one subject is not influenced by the response of another subject
3. Chi square conclusions apply to populations of which the samples are representative. Random sampling is one way to ensure representativeness.

Chi square as a test for goodness of fit

- A chi square goodness-of-fit test allows an evaluation of a theory and its ability to predict outcomes
- The formula is the same as that for a test of independence; thus, both chi square tests require observed values and expected values
- The expected values for a goodness-of-fit test, however, come from a hypothesis, theory, or model, rather than from calculations on the data themselves as in tests of independence
- With a goodness-of-fit test, you can determine whether or not there is a good fit between the theory and the data
- In a chi square goodness-of-fit test, the null hypothesis is that the actual data fit the expected data
- A rejected H0 means that the data do not fit the model—that is, the model is inadequate
- A retained H0 means that the data are not at odds with the model
- Degrees of freedom: the term degrees of freedom implies that there are some restrictions. For both tests of independence and goodness of fit, one restriction is always that the sum of the expected frequencies must equal the sum of the observed frequencies; that is, sum E = sum O
- No effect size index
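
A minimal goodness-of-fit sketch, assuming SciPy; the observed counts and the 9:3:3:1 model below are hypothetical illustrations, not data from the text:

```python
# Hedged sketch of a chi-square goodness-of-fit test with SciPy.
from scipy import stats

observed = [315, 101, 108, 32]  # counts in each category (hypothetical)
total = sum(observed)
# Expected values come from the model, and sum E must equal sum O
expected = [total * p for p in (9/16, 3/16, 3/16, 1/16)]

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(chi2, p)  # a large p means the data are not at odds with the model
```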

The appearance of regression lines

- The appearance of regression lines depends not only on the calculated values of a and b but also on the units chosen for the x and y axes and whether there are breaks in the axes
- Basically, you cannot necessarily determine a and b by looking at a graph
- Every scatterplot has two regression lines. One is called the regression of Y onto X, which is what this chapter showed. The other is the regression of X onto Y

Strong relationships but low correlation coefficients

- A small correlation coefficient does not always mean there is no relationship between the two variables
- Nonlinearity: for r to be a meaningful statistic, the best-fitting line through the scatterplot of points must be a straight line. If a curved line fits the data better than a straight line, r will be low, not reflecting the true relationship between the two variables (see the sketch below)
- For curved relationships, researchers often measure the strength of the association with the statistic eta or by calculating the formula for a curve that fits the data
- Truncated range: spuriously low r values can occur when the range of scores in the sample is much smaller than the range of scores in the population (a truncated range)
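
A small demonstration of the nonlinearity point, using NumPy with made-up data: a perfect curved (quadratic) relationship produces an r near zero:

```python
# Hedged illustration: a strong curved relationship can yield r near zero.
import numpy as np

x = np.linspace(-3, 3, 61)
y = x ** 2  # perfect curved (quadratic) relationship

r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))  # approximately 0 despite the perfect curved fit
```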

Partial correlation

- A technique called partial correlation allows you to separate or partial out the effects of one variable from the correlation of two variables
- For example, if you want to know the true correlation between achievement test scores in two school subjects, it is probably necessary to partial out the effects of intelligence, because cognitive ability and achievement are correlated
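
A sketch of a first-order partial correlation, using the standard formula r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)); the three input correlations below are hypothetical:

```python
# Hedged sketch of a first-order partial correlation from the standard formula.
import math

def partial_r(r_xy, r_xz, r_yz):
    """Correlation of X and Y with Z partialed out."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# e.g., two achievement tests (X, Y) with intelligence (Z) partialed out
print(round(partial_r(r_xy=0.60, r_xz=0.50, r_yz=0.55), 3))
```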

Zero correlation

- A zero correlation means there is no linear relationship between two variables; high and low scores on the two variables are not associated in any predictable manner
- When r = 0, the regression line is a horizontal line at the height of Y-bar
- This makes sense; if r = 0, then your best estimate of Y for any value of X is Y-bar

Writing a regression equation

- After designating the Y variable, assemble the data (correlation coefficient, means, standard deviations)
- Equations: pages 119-120
- To measure the accuracy of predictions made from a regression analysis, you need the standard error of estimate: the standard deviation of the differences between predicted outcomes and actual outcomes (a sketch follows)
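
A sketch of the computation under the usual definitions (b = r * s_y / s_x, a = Y-bar - b * X-bar, and a standard error of estimate with N - 2 in the denominator); the data are made up, and this is not the textbook's worked example:

```python
# Hedged sketch: building the regression equation from r, means, and SDs,
# then measuring accuracy with the standard error of estimate.
import numpy as np

x = np.array([2, 4, 5, 7, 8, 10], dtype=float)
y = np.array([3, 5, 4, 8, 9, 11], dtype=float)

r = np.corrcoef(x, y)[0, 1]
b = r * y.std(ddof=1) / x.std(ddof=1)  # slope
a = y.mean() - b * x.mean()            # intercept

y_pred = a + b * x
# standard error of estimate: SD of (actual - predicted), with N - 2 df
see = np.sqrt(np.sum((y - y_pred) ** 2) / (len(x) - 2))
print(round(a, 2), round(b, 2), round(see, 2))
```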

Effect size indexes for 2x2 tables

- Although a chi square test allows you to conclude that two variables are related, it doesn't tell you the degree of relationship
- For that you need an effect size index. There are two: the odds ratio and phi
- Odds: the chance that one thing will happen rather than another, calculated by dividing the number who do by the number who don't
- Perhaps the statistic most widely used to express degree of relationship in 2x2 contingency tables is the odds ratio
- The odds ratio is simply one odds calculation divided by another: odds of event A / odds of event B
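
A tiny worked example with a hypothetical 2x2 table:

```python
# Hedged sketch: odds and the odds ratio for a hypothetical 2x2 table.
#                 improved   not improved
#   treatment        40           10
#   control          20           30
odds_treatment = 40 / 10  # those who do / those who don't
odds_control = 20 / 30
odds_ratio = odds_treatment / odds_control
print(odds_ratio)  # 6.0: the treated odds are 6 times the control odds
```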

Phi

- An effect size index for chi square; it is appropriate for 2x2 tables but not for larger tables
- Formula: phi = square root of (the chi square value from a 2x2 table / N)
- The interpretation of phi is much like that of a correlation coefficient; phi gives you the degree of relationship between the two variables in a chi square analysis
- Small effect: phi = .10
- Medium effect: phi = .30
- Large effect: phi = .50
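
A one-line computation of phi from a hypothetical chi square result:

```python
# Hedged sketch: phi from a 2x2 chi-square result, phi = sqrt(chi2 / N).
import math

chi2 = 9.0  # hypothetical chi-square value from a 2x2 table
n = 100     # total number of subjects
phi = math.sqrt(chi2 / n)
print(phi)  # 0.3, a medium effect by the guidelines above
```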

An important consideration

- Because the big question is "What is the nature of the population?", you should keep in mind that you can make a Type II error
- A Type II error is very likely when expected frequencies are small
- To avoid Type II errors, use large Ns
- The larger the N, the more power you have to detect any real differences, and the more confidence you have that the chi square p value is accurate

Combining categories

- Combining categories is a technique used to create larger Ns. Thus, combining categories reduces the probability of Type II errors and increases assurance that the p value is accurate

Correlation and regression

- Correlation is a statistical technique that describes the direction and degree of relationship between two variables
- In this chapter, regression is used to draw the line that best fits the data and to predict a person's score on one variable when you know that person's score on a second, correlated variable
- These ideas were developed by Galton

Interpretations of r

- Cohen sought an effect size index for correlation coefficients; the effect size index for r is r itself
- Cohen's proposal was that small = .10, medium = .30, and large = .50
- In the literature, the lower third of coefficients is <.20, the middle third .20 to .30, and the upper third >.30
- The proper adjective for a particular r depends on the kind of research being discussed

When df=1

- For years, the recommendation for a 2x2 table with 1 df and an expected frequency less than 5 was to apply Yates's correction
- The effect of Yates's correction is to reduce the size of the obtained chi square
- According to current thinking, Yates's correction results in too many Type II errors
- Monte Carlo method: using computer programs to draw thousands of large random samples from known populations for which the null hypothesis is true. Each sample is analyzed with a chi square test, and the proportion of rejections of the null hypothesis (Type I errors) is calculated
- The trend has been to show that the theoretical chi square distribution gives accurate conclusions even when the expected frequencies are considerably less than 5
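
As a tooling aside (an assumption about SciPy, not something from the text): SciPy's chi square test of independence applies Yates's correction to 2x2 tables by default, and correction=False gives the uncorrected value; the table below is hypothetical:

```python
# Hedged sketch: Yates's correction in SciPy; the corrected chi-square
# is smaller than the uncorrected one, as described above.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[12, 8],
                  [5, 15]])

chi2_corrected, p1, _, _ = chi2_contingency(table, correction=True)
chi2_plain, p2, _, _ = chi2_contingency(table, correction=False)
print(chi2_corrected, chi2_plain)  # corrected value is smaller
```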

Dichotomous variables

- If one of the variables is dichotomous (has only two values), then a biserial correlation (r sub b) or a point-biserial correlation (r sub pb) is appropriate
- Variables such as height (tall vs. short) or gender (male vs. female) are examples
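
A minimal point-biserial sketch, assuming SciPy; the 0/1 codes and scores below are invented:

```python
# Hedged sketch: point-biserial correlation for a dichotomous variable.
from scipy import stats

group = [0, 0, 0, 1, 1, 1, 0, 1]           # dichotomous variable (0/1 codes)
score = [14, 12, 16, 20, 22, 19, 13, 21]   # continuous variable

r_pb, p = stats.pointbiserialr(group, score)
print(r_pb, p)
```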

Mann-Whitney U test for larger samples

- If one or both samples have 21 or more scores, the normal curve is used to assess U (a sketch of the normal approximation follows)
- Formula: page 335
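
Since the page-335 formula isn't reproduced here, the sketch below uses the standard normal approximation, mu_U = n1\*n2/2 and sigma_U = sqrt(n1\*n2\*(n1 + n2 + 1)/12), with hypothetical numbers:

```python
# Hedged sketch of the normal-curve approximation for U with large samples.
import math

def u_to_z(u, n1, n2):
    """Convert an obtained U to a z score for the normal-curve test."""
    mu_u = n1 * n2 / 2
    sigma_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (u - mu_u) / sigma_u

print(round(u_to_z(u=150, n1=25, n2=22), 2))  # hypothetical U and sample sizes
```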

When D=0 and tied ranks

- In calculating r sub s, you may get a D value of zero
- A zero difference means that the two ranks in that pair correspond exactly. If all differences were zero, r sub s would be 1.00. Thus, differences of zero should not be dropped when calculating r sub s

Positive correlation

- In the case of a positive correlation between two variables, high numbers on one variable tend to be associated with high numbers on the other variable, and low numbers on one variable with low numbers on the other
- A correlation coefficient of 1.00 is referred to as perfect correlation
- Scatterplot: a graph of the scores of a bivariate frequency distribution
- Regression line: a line of best fit for a scatterplot
- When there is a perfect correlation (r = 1.00), all points fall exactly on the regression line
- You can have a perfect correlation even if the paired numbers aren't the same; the only requirement for a perfect correlation is that the differences between pairs of scores are all the same
- If they are the same, then all the points of the scatterplot lie on the regression line, correlation is perfect, and an exact prediction can be made

Two cautions about the test

- It is the differences that are ranked, not the scores themselves
- A rank of 1 always goes to the smallest difference

Negative correlation

- N-effect: the finding that an increase in the number of competitors goes with a decrease in competitive motivation and, thus, test scores
- When a correlation is negative, increases in one variable are accompanied by decreases in the other variable (an inverse relationship)
- With negative correlation, the regression line goes from the upper left corner of the graph to the lower right corner; such lines have a negative slope
- In cases of perfect negative correlation, too, all the data points of the scatterplot fall on the regression line
- Although some correlation coefficients are positive and some are negative, one is not more valuable than the other. The algebraic sign simply tells you the direction of the relationship; the absolute size of r tells you the degree of the relationship

Comparison of nonparametric to parametric tests

- Nonparametric tests are similar to parametric tests in that the hypothesis-testing logic is the same for both kinds of tests; both yield the probability of the observed data when the null hypothesis is true
- They differ in that parametric tests require assumptions about the populations that nonparametric tests do not
- For example, parametric tests such as t tests and ANOVA assume normally distributed populations with equal variances for accurate probabilities; nonparametric tests do not assume the populations have these characteristics
- The null hypothesis for nonparametric tests is that the population distributions are the same, so the interpretation of a rejected null hypothesis may not be quite so clear-cut for a nonparametric test
- Power: if the populations being sampled from are normally distributed and have equal variances, then parametric tests are more powerful than nonparametric ones. If the populations are not normal or do not have equal variances, it is less clear what to recommend
- Scale of measurement: currently, most researchers do not use a scale-of-measurement criterion when deciding between nonparametric and parametric tests

Linear regression

- One meaning of regression is a statistical technique that allows you to make predictions and draw a line of best fit for bivariate distributions
- The word regression also refers to a phenomenon that occurs when an extreme group is tested a second time: those who do well the first time can be expected to do worse the second time, and those who do poorly the first time can be expected to do better the second time. This is called regression to the mean
- Linear regression: a technique that lets you predict a specific score on one variable given a score on a second variable
- To make specific predictions, you must calculate a regression equation

Uses of r

- Reliability of tests: correlation coefficients are used to assess the reliability of measuring devices such as tests, questionnaires, and instruments
- Reliability refers to consistency; a correlation coefficient between test and retest scores gives you the degree of agreement
- An r of .80 or greater indicates adequate reliability
- To establish causation—NOT! A high correlation coefficient does not give you the kind of evidence that allows you to make cause-and-effect statements
- A sizable correlation is a necessary but not a sufficient condition for establishing causality

Multiple correlation

- Several variables can be combined, and the resulting combination can be correlated with one variable
- With this technique, multiple correlation, a more precise prediction can be made
- Performance in school can be predicted better by using several measures of a person rather than one

Nonparametric tests

- Some kinds of data do not meet the assumptions that parametric tests such as t tests and ANOVA are based on
- Nonparametric statistical tests (sometimes called distribution-free tests) provide correct values for the probability of a Type I error regardless of the nature of the populations the samples come from

Assigning ranks and tied scores

- Sometimes you may choose a nonparametric test for data that are not already in ranks. In such cases, you have to rank the scores
- For the Mann-Whitney test, it doesn't make any difference whether you call the largest or the smallest score 1
- Ties are handled by giving all tied scores the same rank. This rank is the mean of the ranks the tied scores would have if no ties had occurred (see the sketch below)
- Ties do not affect the value of U if they are in the same group. If several ties involve both groups, a correction factor may be advisable
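
A quick illustration of mean-of-ranks tie handling, assuming SciPy's rankdata:

```python
# Hedged sketch: tied scores each get the mean of the ranks they would
# occupy, which is what rankdata's 'average' method does.
from scipy.stats import rankdata

scores = [3, 7, 7, 9, 12]
print(rankdata(scores, method='average'))  # [1.  2.5 2.5 4.  5. ]
```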

Correlation of ranked data

- Spearman's name is attached to the coefficient that is used to show the degree of correlation between two sets of ranked data
- He used the Greek letter rho as the symbol; the modern symbol for Spearman's statistic is r sub s
- Spearman's r sub s is a special case of the Pearson product-moment correlation coefficient and is most often used when the number of pairs of scores is small
- r sub s is a descriptive statistic
- Calculation of r sub s: formula on page 349
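
Since the page-349 formula isn't reproduced here, the sketch below uses the common rank-difference form r_s = 1 - 6\*sum(D^2) / (N(N^2 - 1)) and checks it against SciPy; the ranks are hypothetical:

```python
# Hedged sketch: Spearman r_s by the rank-difference formula, checked
# against scipy.stats.spearmanr (the two agree when there are no ties).
from scipy import stats

rank_x = [1, 2, 3, 4, 5, 6]
rank_y = [2, 1, 4, 3, 6, 5]

n = len(rank_x)
sum_d2 = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
r_s = 1 - (6 * sum_d2) / (n * (n ** 2 - 1))

rho, p = stats.spearmanr(rank_x, rank_y)
print(r_s, rho)  # should agree
```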

Testing the significance of r sub s

- Table L in Appendix C gives the critical values of r sub s for alpha levels of .05 and .01 when the number of pairs is 16 or fewer
- If the obtained r sub s is equal to or greater than the value in Table L, reject the null hypothesis. The null hypothesis is that the population correlation coefficient is zero
- Rather large correlations are required for significance
- As with r, not much confidence can be placed in low or moderate correlation coefficients that are based on only a few pairs of scores
- For samples larger than 16, test the significance of r sub s by using Table A. Note, however, that Table A requires df, not the number of pairs
- The degrees of freedom for r sub s is N - 2 (number of pairs minus 2), which is the same formula used for r

Mann-Whitney U Test

- The Mann-Whitney U test is a nonparametric test for data from an independent-samples design
- The Mann-Whitney test produces a statistic, U, that is evaluated by consulting the sampling distribution of U
- When U is calculated from small samples (both samples have 20 or fewer scores), the sampling distribution is given in Table H
- If the number of scores in one of the samples is greater than 20, the normal curve is used to evaluate U; with larger samples, a z score is calculated
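
A minimal sketch with SciPy, which computes U and a p-value directly (replacing the Table H or normal-curve lookup); the two samples are hypothetical:

```python
# Hedged sketch: Mann-Whitney U test for two independent samples.
from scipy import stats

group_a = [12, 15, 9, 20, 17, 14]
group_b = [22, 25, 19, 28, 24, 21]

u, p = stats.mannwhitneyu(group_a, group_b, alternative='two-sided')
print(u, p)
```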

Other kinds of correlation coefficients

- The Pearson product-moment correlation coefficient is appropriate for measuring the degree of the relationship between two linearly related, continuous variables
- Sometimes, however, the data do not consist of two linearly related, continuous variables
- When the data are ranks rather than scores from a continuous variable, researchers calculate Spearman's r sub s
- If the relationship between two variables is curved rather than linear, then the correlation ratio eta gives the degree of association

Effect size index for r sub s

- The Spearman correlation coefficient functions just like the Pearson product-moment correlation coefficient, so its interpretation faces the same issues
- Depending on the situation, a good outcome could be a correlation of less than .1 or more than .9

The Chi square distribution and the Chi square test

- The chi square distribution is a theoretical distribution, just as t and F are
- Like t and F, as the number of degrees of freedom increases, the shape of the chi square distribution changes; it is a positively skewed curve
- χ² = Σ[(O − E)² / E]
- O = observed frequency: the count of actual events in a category
- E = expected frequency: the theoretical frequency derived from the null hypothesis
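
A direct computation from the definition, with hypothetical counts:

```python
# Hedged sketch: chi-square from the definition chi2 = sum((O - E)^2 / E).
observed = [44, 56]
expected = [50, 50]  # from a null hypothesis of a 50:50 split

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2)  # (44-50)^2/50 + (56-50)^2/50 = 1.44
```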

Chi square with more than one degree of freedom

- The chi square values you have found so far have been evaluated by comparing them to a chi square distribution with 1 df
- In this section, the problems require chi square distributions with more than 1 df. Some are tests of independence and some are goodness-of-fit problems

Coefficient of determination

- The correlation coefficient is the basis of the coefficient of determination, which tells the proportion of variance that two variables in a bivariate distribution have in common; it is an estimate of common variance
- The coefficient of determination is calculated by squaring r; it is always a positive value between 0 and 1
- A coefficient of determination of .52 tells you that 52% of the variance in the two sets of scores is common variance, while 48% is independent variance, that is, variance in one test that is not associated with variance in the other test
- Common variance is often illustrated with two overlapping circles, each of which represents the total variance of one variable. The overlapping portion is the amount of common variance

Correlation coefficient

- The correlation coefficient is used in a wide variety of fields
- It is so popular because it provides a quantitative answer to a very common question: what is the degree of relationship between _ and _?
- The definition formula for the correlation coefficient is r = Σ(z_x z_y) / N (a sketch follows)
- where z_x is a z score for variable X, z_y is the corresponding z score for variable Y, and N is the number of pairs of scores
- As far as which variable to call X and which to call Y, it doesn't make any difference for correlation coefficients. With regression, however, it may make a big difference
- Correlation coefficients should be based on an "adequate" number of observations. The traditional rule-of-thumb definition of adequate is 30-50
- Small-N problems allow you to spend your time on interpretation and understanding rather than number crunching
- The correlation coefficient r is a sample statistic; the corresponding population parameter is symbolized by ρ (rho) and is computed with the same formula
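
A small check of the definition formula against NumPy's built-in Pearson r, using invented data:

```python
# Hedged sketch: the definition formula r = sum(z_x * z_y) / N, checked
# against np.corrcoef.
import numpy as np

x = np.array([1, 3, 4, 6, 8], dtype=float)
y = np.array([2, 3, 5, 5, 9], dtype=float)

z_x = (x - x.mean()) / x.std()  # population SDs to match the N divisor
z_y = (y - y.mean()) / y.std()
r = np.sum(z_x * z_y) / len(x)

print(round(r, 4), round(np.corrcoef(x, y)[0, 1], 4))  # should agree
```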

Chi square as a test of independence

- The most common use of chi square is to test the independence of two variables
- The null hypothesis for a chi square test of independence is that the two variables are independent—that there is no relationship between the two
- If the null hypothesis is rejected, you can conclude that the two variables are related and then tell how they are related
- The opposite of independent is contingent; rejecting the null hypothesis supports the alternative hypothesis that the two variables are contingent
- Expected values: a chi square test of independence requires observed values and expected values. Expected frequencies are those that are expected if the null hypothesis is true
- The formula for the expected value of a cell is its row total, multiplied by its column total, divided by N
- Degrees of freedom: every chi square value is accompanied by its df. To determine the df for any RxC table, use the formula (R − 1)(C − 1)
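
A sketch of a test of independence with SciPy's chi2_contingency, which also returns the expected frequencies (row total × column total / N) and the (R − 1)(C − 1) degrees of freedom; the 2x3 table is hypothetical:

```python
# Hedged sketch: chi-square test of independence for an RxC table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 20, 10],
                  [20, 25, 25]])

chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)  # dof = (2-1)(3-1) = 2
print(expected)      # each cell: row total * column total / N
```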

Wilcoxon-Wilcox multiple-comparisons test

- The next technique is for data from three or more independent groups
- This method allows you to compare all possible pairs of groups, regardless of the number of groups in the experiment. It is the nonparametric equivalent of a one-way ANOVA followed by Tukey HSD tests
- The Wilcoxon-Wilcox multiple-comparisons test allows you to compare all possible pairs of treatments, which is like having a Mann-Whitney test on each pair of treatments
- However, this test keeps the alpha level at .05 or .02, regardless of the number of pairs
- This test requires independent samples
- To carry out a Wilcoxon-Wilcox test (see the sketch after this list):
- Begin by ordering the scores from the K treatments into one overall ranking
- Then, within each sample, add the ranks, which gives a sum R for each sample
- For each pair of treatments, subtract one sum R from the other, which gives a difference
- Finally, compare the absolute size of the difference to a critical value in Table K
- The rationale of the test is that when the null hypothesis is true, the various sum R values should be about the same
- A large difference indicates that the two samples came from different populations. Of course, the larger K is, the greater the likelihood of large differences by chance alone, and this is taken into account in the sampling distribution that Table K is based on
- The test can be used only when the Ns for all groups are equal. A common solution to the problem of unequal Ns is to reduce the too-large group(s) by dropping one or more randomly chosen scores; a better solution is to conduct the experiment so that you have equal Ns
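
A sketch of the ranking-and-summing steps above with hypothetical equal-N groups; the final Table K lookup is left as a manual step, since the critical values aren't reproduced here:

```python
# Hedged sketch of the Wilcoxon-Wilcox computation: one overall ranking,
# a sum R per group, and absolute pairwise differences in rank sums.
from itertools import combinations
from scipy.stats import rankdata

groups = {
    "A": [12, 15, 9, 20],
    "B": [22, 25, 19, 28],
    "C": [14, 11, 16, 13],
}

all_scores = [s for scores in groups.values() for s in scores]
all_ranks = rankdata(all_scores)  # one overall ranking, ties averaged

rank_sums, i = {}, 0
for name, scores in groups.items():
    rank_sums[name] = all_ranks[i:i + len(scores)].sum()
    i += len(scores)

for g1, g2 in combinations(groups, 2):
    diff = abs(rank_sums[g1] - rank_sums[g2])
    print(g1, g2, diff)  # compare each difference to the Table K critical value
```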

The rationale of nonparametric tests

- The only part of the hypothesis-testing rationale that is unique to the tests in this chapter is that the sampling distributions are derived from ranks rather than continuous, quantitative scores

The rationale

- The rationale is that if there is no difference between the two populations, the absolute value of the negative sum should be equal to the positive sum, with all deviations being due to sampling fluctuations
- Table J shows the critical values for both one- and two-tailed tests for several alpha levels. To enter the table, use N, the number of pairs of subjects
- Reject H0 when the obtained T is equal to or less than the critical value in the table. Like the Mann-Whitney U, the Wilcoxon T must be equal to or less than the tabled critical value if you are to reject H0

The regression equation

- The regression equation is a formula for a straight line; it allows you to predict a value of Y, given a value of X
- Y = a + bX
- where Y is the Y value predicted for a particular X value, a is the point at which the regression line intersects the y-axis, b is the slope of the regression line, and X is a value for which you wish to predict a Y value
- Y is assigned to the variable you wish to predict
- a and b are called regression coefficients; they can have positive or negative values
- A negative value of a means that the line crosses the y-axis below the zero point; a negative value of b means the line has a negative slope
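
A sketch using scipy.stats.linregress, which returns a and b in one call (an alternative to the by-hand computation sketched earlier); the data are hypothetical:

```python
# Hedged sketch: fitting Y = a + bX and predicting from the coefficients.
from scipy import stats

x = [2, 4, 5, 7, 8, 10]
y = [3, 5, 4, 8, 9, 11]

result = stats.linregress(x, y)
a, b = result.intercept, result.slope  # regression coefficients
print(a, b)
print(a + b * 6)                       # predicted Y for X = 6
```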

Small expected frequencies

- The theoretical chi square distribution, which comes from a mathematical formula, is a continuous function that can have any positive numerical value
- Chi square test statistics calculated from frequencies, however, change in discrete steps
- When the expected frequencies are very small, the discrete steps are quite large
- As the expected frequencies approach zero, the theoretical chi square distribution becomes a less and less reliable way to estimate probabilities
- The fear is that such chi square analyses will reject the null hypothesis more often than warranted

Tied scores and D=0

- Ties among the D scores are handled in the usual way—that is, each tied score is assigned the mean of the ranks that would have been assigned if there had been no ties
- Ties do not affect the probability of the rank sum unless they are numerous (10% or more of the ranks are tied)
- When there are numerous ties, the probabilities in Table J associated with a given critical T value may be too large. In this case, the test is described as too conservative; that is, it fails to detect an actual difference in populations
- When one of the D scores is zero, it is not assigned a rank and N is reduced by 1. When two of the D scores are tied at zero, each is given the average rank of 1.5. Each is kept in the computation; one is assigned a plus sign and the other a minus sign
- If three D scores are zero, one is dropped, N is reduced by 1, and the remaining two are given signed ranks of +1.5 and -1.5. You can generalize from these three cases to situations with four, five, or more zeros

When df>1

- When df > 1, the same uncertainty exists if one or more expected frequencies are small
- The chi square test gives fairly accurate probabilities if the total sample size is greater than 20

Chi Square

- When the data consist of categories and their frequency counts, the statistic needed is chi square
- Chi square is the name of both a sampling distribution and a statistical test
- A chi square test will give the probability of the data gathered if the null hypothesis of no difference is true

Wilcoxon signed-rank T test for larger samples

- When the number of pairs exceeds 50, the T statistic may be evaluated using the z test and the normal curve (a sketch of the normal approximation follows)
- Formula for the test statistic: page 343
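
Since the page-343 formula isn't reproduced here, the sketch below uses the standard normal approximation, mu_T = N(N + 1)/4 and sigma_T = sqrt(N(N + 1)(2N + 1)/24), with hypothetical numbers:

```python
# Hedged sketch of the normal-curve evaluation of T for large N.
import math

def t_to_z(t, n):
    """Convert an obtained Wilcoxon T to a z score (n = number of pairs)."""
    mu_t = n * (n + 1) / 4
    sigma_t = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (t - mu_t) / sigma_t

print(round(t_to_z(t=400, n=55), 2))  # hypothetical T and number of pairs
```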

Mann-Whitney U test for small samples

- With the U test, a small U value obtained from the data indicates a large difference between the samples
- This is just the opposite of t tests, ANOVA, and chi square, in which large sample differences produce large values
- Thus, for statistical significance with the Mann-Whitney U test, the obtained U value must be equal to or smaller than the critical value in Table H
- U = 0 when the members of one sample all rank lower than every member of the other sample. Under such conditions, rejecting the null hypothesis seems reasonable

Finding T involves some steps that are different from finding U

1. Find a difference, D, for every pair of scores. The order of subtraction doesn't matter, but it must be the same for all pairs.
2. Using the absolute value of each difference, rank the differences. The rank of 1 is given to the smallest difference, 2 goes to the next smallest, and so on.
3. Attach to each rank the sign of its difference. Thus, if a difference produces a negative value, the rank for that pair is negative.
4. Sum the positive ranks and sum the negative ranks.
5. T is the smaller of the absolute values of the two sums.
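
A sketch of these five steps in Python, assuming SciPy's rankdata for step 2; the paired scores are invented (and contain no zero differences, so the zero-handling rules above don't come into play):

```python
# Hedged sketch implementing the five steps above for the Wilcoxon T.
from scipy.stats import rankdata

pre = [60, 72, 55, 80, 66, 74, 58]
post = [65, 70, 63, 86, 64, 80, 66]

d = [a - b for a, b in zip(pre, post)]         # step 1: differences
ranks = rankdata([abs(x) for x in d])          # step 2: rank |D|, smallest = 1
signed = [r if x > 0 else -r
          for r, x in zip(ranks, d)]           # step 3: attach signs

pos_sum = sum(r for r in signed if r > 0)      # step 4: sum each sign
neg_sum = sum(r for r in signed if r < 0)
t = min(abs(pos_sum), abs(neg_sum))            # step 5: T is the smaller sum
print(t)
```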

