Psyc Stats Exam 3 (Ch 12-15)

least-squared-error solution

best-fitting line with the smallest total squared error

A chi-square test for independence has df = 2. What is the total number of categories (cells in the matrix) that were used to classify individuals in the sample?

6

repeated-measures design

the same group is tested in all of the different treatment conditions

The chi-square test for goodness of fit evaluates ______.

the shape or proportions for a population distribution

Standard deviation

the square root of the variance

independent-measures designs

the study uses a separate sample for each of the different treatment conditions being compared

N

the total # of scores in the entire study

2 interpretations of ANOVA correspond to....

the two hypotheses (null and alternative)

Each person in a sample of 180 male college students is cross-classified in terms of his own political preference (Democrat, Republican, Other) and that of his father (Democrat, Republican, Other). The null hypothesis would state that

there is no relationship between the political preferences of fathers and sons

The *null* hypothesis for the chi-square test for independence states that ____.

there is no relationship between the two variables

levels of the factor example; a study that examined performance under three different telephone conditions would have __________ levels of the factor.

three

slope

value which determines how much Y variable changes when X is increased by one point

Y-intercept

value which determines the value of Y when X = 0

factor

variable that designates the groups being compared -in the context of ANOVA, an independent variable or a quasi-independent variable -made up levels - ie. age group

quasi-independent variable

when a researcher uses a non-manipulated variable to designate groups -ex. the three groups in Figure 12.1 could represent six-year-old, eight-year-old, and ten-year-old children

Point biserial correlation: one variable is _____

dichotomous

Contingency table

displays data collected when there are two independent variables

linear relationship equation

equation expressed by the equation Y = bX + a

between degrees of freedom

number of groups minus 1

ratio

numerical; can be split or "broken" into multiple parts or sections, whereas nominal and ordinal scales cannot

Goodness-of-Fit Equations variables

p = the proportion stated in the null hypothesis for each category; n = sample size; ƒo = observed frequency; ƒe = expected frequency; C = number of categories

Goodness-of-Fit Test Equations

ƒe = pn; χ² = ∑(ƒo − ƒe)²/ƒe; df = C − 1
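As a quick check, the goodness-of-fit pieces (ƒe = pn, χ² = ∑(ƒo − ƒe)²/ƒe, df = C − 1) can be worked in a short Python sketch; the proportions and observed counts below are made-up numbers for illustration:

```python
# Hypothetical null-hypothesis proportions and observed frequencies
p = [0.25, 0.25, 0.25, 0.25]
fo = [18, 30, 24, 28]
n = sum(fo)                       # total sample size = 100

fe = [pi * n for pi in p]         # expected frequency for each category: fe = p*n
chi_square = sum((o - e) ** 2 / e for o, e in zip(fo, fe))
df = len(p) - 1                   # df = C - 1
```

Here each ƒe = 25, χ² works out to 3.36, and df = 3; compare χ² to the critical value from the chi-square table.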

the X^2 distribution is

positively skewed

analysis of regression

process of testing the significance of a regression equation

dichotomous variable

quantity with only two values

Pearson correlation equations

r = SP/√(SSx·SSy); SSx = ∑X² − (∑X)²/n; SSy = ∑Y² − (∑Y)²/n
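The Pearson pieces (SSx, SSy, SP, then r = SP/√(SSx·SSy)) can be checked with a minimal Python sketch; the X and Y values are hypothetical:

```python
import math

# Hypothetical paired scores
X = [1, 2, 3, 4, 5]
Y = [2, 4, 5, 4, 10]

n = len(X)
SSx = sum(x ** 2 for x in X) - sum(X) ** 2 / n            # SSx = ΣX² − (ΣX)²/n
SSy = sum(y ** 2 for y in Y) - sum(Y) ** 2 / n            # SSy = ΣY² − (ΣY)²/n
SP = sum(x * y for x, y in zip(X, Y)) - sum(X) * sum(Y) / n  # SP = ΣXY − ΣXΣY/n

r = SP / math.sqrt(SSx * SSy)
```

For these numbers SSx = 10, SSy = 36, SP = 16, so r = 16/√360 ≈ 0.843.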

posttest

refers to measuring the dependent variable after changing the independent variable

The three F-ratios have the same basic structure

- Numerator measures actual treatment mean differences - Denominator measures the mean differences expected when there is no treatment effect

Find the estimated standard error for the sample mean for each of the following samples:a. n = 4 with SS = 48;b. n = 6 with SS = 270;

First calculate the sample variance (s² = SS/df), then the estimated standard error [sM = √(s²/n)]. a) s² = 16 with estimated SE = 2; b) s² = 54 with estimated SE = 3
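The two answers above follow directly from s² = SS/df and sM = √(s²/n), as a short Python check shows:

```python
import math

def estimated_se(SS, n):
    """Return (sample variance, estimated standard error) for given SS and n."""
    s2 = SS / (n - 1)              # sample variance uses df = n - 1
    return s2, math.sqrt(s2 / n)   # sM = sqrt(s2/n)

s2_a, se_a = estimated_se(48, 4)   # sample a: n = 4, SS = 48
s2_b, se_b = estimated_se(270, 6)  # sample b: n = 6, SS = 270
```

Sample a gives s² = 16 and sM = 2; sample b gives s² = 54 and sM = 3.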

HOW TO: Hypothesis Testing with a Two-Factor ANOVA

1. State the hypotheses and select an alpha level. 2. Locate critical region 3. compute f-ratios 4. make decision

Using a sample of 120 creative artists, you wish to test the null hypothesis that creative artists are equally likely to have been born under any of the twelve astrological signs. Assuming that the twelve astrological signs each contain an equal number of calendar days, the expected frequency for each category equals

10

Refer to the two-variable chi-square study shown below. The entries in the table represent observed frequencies. Men Women Prefer Coke 8 2 Prefer Pepsi 4 6 What is the expected frequency in the Women-Prefer Pepsi cell?

4

An analysis of variance produces SStotal = 80 and SSwithin = 30. For this analysis, what is SSbetween?

50, because SSbetween=SStotal-SSwithin

A soda manufacturer wishes to test the hypothesis that more people like his brand than Brand X. The manufacturer obtains a sample of 100 people and finds that 53 people prefer his brand and 47 prefer Brand X. In this study, chi square is equal to:

9/50 + 9/50
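With a 50/50 null hypothesis and n = 100, the expected frequency for each brand is 50, so the answer above evaluates to 0.36; a two-line Python check:

```python
# Observed preferences vs. expected frequencies under a 50/50 null hypothesis
fo = [53, 47]
fe = [50, 50]
chi_square = sum((o - e) ** 2 / e for o, e in zip(fo, fe))  # 9/50 + 9/50 = 0.36
```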

Under what circumstance is a t statistic used instead of a z-score for a hypothesis test?

A t statistic is used instead of a z-score when the population standard deviation and variance are not known.

advantage of ANOVA over t-tests

Advantage #1: t tests are limited to situations with only 2 treatments to compare. Advantage #2: ANOVA can compare 2 or more treatments, which gives researchers much greater flexibility in designing experiments and interpreting results.

What does ANOVA stand for?

Analysis of variance

Which of the following statements is true for X^2?

As the number of categories increases the critical value of X^2 increases

When conducting a chi-square test it is appropriate to

compare the observed frequencies with the frequencies that would be expected by chance

Which of the following assumptions must be satisfied in order for chi square to give accurate results? A. None of the expected frequencies should be less than 2 B. none of the above C. No response should be related to or dependent on any other response D. A participant must fall in only one category E. All of the above

E. All of the above

F

F-ratio

How does sample size influence the outcome of a hypothesis test and measures of effect size? How does the standard deviation influence the outcome of a hypothesis test and measures of effect size?

Increasing the sample size increases the likelihood of rejecting the null hypothesis but has little or no effect on measures of effect size. Increasing the standard deviation (sample variance) reduces the likelihood of rejecting the null hypothesis and reduces measures of effect size.

A researcher suspects that vegetarians prefer Pepsi and meat-eaters prefer Coke. The researcher obtains a sample of 80 people, and the results are: 30 vegetarians prefer Pepsi, 10 vegetarians prefer Coke; 5 meat-eaters prefer Pepsi, and 35 meat-eaters prefer Coke. If chi square is statistically significant using the .05 criterion, what should the researcher decide?

Meat-eaters prefer Coke, whereas vegetarians prefer Pepsi.

What does it mean to obtain a negative value for the chi-square statistic?

The chi-square statistic can never be negative

two sample t-test (aka independent measures) variance Pooled variance equation

Pooled variance = (SS1 + SS2)/(df1 + df2)

variance equations/symbols

Population variance divides SS by N; sample variance divides SS by N − 1

to convert SD to SS

SD = √(SS/n), so SS = n(SD)²

Since all discrepancies between observed and expected frequencies are squared, the chi-square test is...

nondirectional

chi square test for independence

The Chi-Square Test of Independence determines whether there is an association between categorical variables (i.e., whether the variables are independent or related). It is a nonparametric test

The data that you collect suggest that the between-treatments variance is small, relative to the within-treatment variance, so the F-ratio for your study is likely to be (close to 1.00/ substantially larger than 1.00), suggesting that (the null hypothesis will be rejected/will not be rejected)

The data that you collect suggest that the between-treatments variance is small relative to the within-treatment variance, so the F-ratio for your study is likely to be close to 1.00, suggesting that the null hypothesis will not be rejected

T/F ANOVA allows researchers to compare several treatment conditions without conducting several hypothesis tests.

True

T/F All ANOVAs are non-directional

True

T/F MSbetween is variance that includes both treatment effects and random chance

True

If there is no systematic treatment effect, then what value is expected, on average, for the F-ratio is an ANOVA?

When Ho is true, the expected value for the F-ratio is 1.00 because the top and bottom of the ratio are both measuring the same variance.

When X^2 (observed) exceeds X^2 (critical), one can conclude...

The observed frequencies are significantly different from the expected frequencies

independent variable

When a researcher manipulates a variable to create the treatment conditions in an experiment

levels

aka "*levels* of the factor" the individual groups or treatment conditions that are used to make up a factor

Stages of repeated-measures analysis (stage 1)

analysis of the SS and df into within-treatments and between-treatments components

as sample size (n) increases what happens to normal distribution?

as sample size (n) increases, distribution becomes more normal and standard deviation becomes smaller

chi square test

assesses whether the observed frequencies differ significantly from expected frequencies

Slope of a line, b, can be calculated...

b = SP/SSx OR b = r(sY/sX). The line passes through (MX, MY), therefore a = MY − bMX. This regression line equation results in the least squared error between the data points and the line.
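The slope and intercept formulas can be verified numerically; the X and Y data below are hypothetical, and the last line confirms the line passes through (MX, MY):

```python
# Least-squares regression line: b = SP/SSx, a = My - b*Mx
X = [1, 2, 3, 4]
Y = [3, 5, 4, 8]

n = len(X)
Mx, My = sum(X) / n, sum(Y) / n
SSx = sum((x - Mx) ** 2 for x in X)
SP = sum((x - Mx) * (y - My) for x, y in zip(X, Y))

b = SP / SSx            # slope
a = My - b * Mx         # Y-intercept
on_line = b * Mx + a    # predicted Y at X = Mx; should equal My
```

For these numbers b = 1.4 and a = 1.5, and the predicted Y at X = MX equals MY, as the formulas guarantee.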

A large discrepancy between 𝒇e and 𝒇o for a chi-square test means...(select all that apply) a. The value for the chi-square statistic will be smaller -IT WILL BE LARGER b. The probability of rejecting the null hypothesis increases c. The probability that the data do not fit the null hypothesis increases d. All of the above

b. The probability of rejecting the null hypothesis increases c. The probability that the data do not fit the null hypothesis increases

The following scores were measured on an interval scale. If the scores are converted to an ordinal scale (ranks), what values should be assigned to the two individuals who started with scores of X = 5? Scores: 2, 3, 5, 5, 7, 10, 18

both should receive a rank of 3.5
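Tied scores each receive the average of the ranks they would have occupied (positions 3 and 4 average to 3.5 here). A small Python sketch of that tie-averaging rule:

```python
def tied_ranks(scores):
    """Rank scores from smallest to largest, averaging ranks for tied values."""
    ordered = sorted(scores)
    positions = {}
    # Collect the 1-based positions of every occurrence of each value.
    for i, v in enumerate(ordered, start=1):
        positions.setdefault(v, []).append(i)
    # Each score gets the mean of the positions its value occupies.
    return [sum(positions[v]) / len(positions[v]) for v in scores]

ranks = tied_ranks([2, 3, 5, 5, 7, 10, 18])
```

The two scores of X = 5 both come out as 3.5, matching the answer above.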

cell

box within the table showing different combinations of the variables

The expected frequencies in a chi-square test ________.

can contain decimal values (or fractions)

expected frequencies ___.

can contain fractions or decimals

nominal

categorical or variables grouped by names

The chi-square test is designed for use when observations are classified according to...

categories

between-treatments variance

caused by systematic treatment effects (e.g., age group differences) and/or random, unsystematic variance due to individual differences and sampling error

within-treatments variance

caused by random, unsystematic variance due to individual differences and sampling error

Stages of repeated-measures analysis (stage 3)

computation of variances and the F-ratio

monotonic relationship

consistently one-directional relationship between two variables

Phi correlation: both variables are _____

dichotomous

residual variance

denominator of the F-ratio -variance (differences) expected with no treatment effect

covariability

describes how much scores are spread out or differ from one another, but takes into account how similar these differences or changes are for each subject from one variable to the other

independence

descriptor for the three hypothesis tests, meaning results from each test are totally unrelated

goal of ANOVA in this experiment

determine whether the mean differences observed among the samples provide enough evidence to conclude that there are mean differences among the three populations

Percentage of variance accounted for

determining the amount of variability in scores explained by the treatment effect is an alternative method for measuring effect size: r² = t²/(t² + df)

correlation matrix

diagram of results from multiple relationships -shows the correlation coefficients between several variables

When will only half of a correlation matrix be displayed?

if correlation matrix is perfectly symmetrical

calculating estimated standard error equation (type B)

if sample sizes are different, calculate pooled variance: sp² = (SS1 + SS2)/(df1 + df2)
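Pooled variance and the resulting estimated standard error for an independent-measures t test can be sketched in Python; the SS and n values below are hypothetical:

```python
import math

def pooled_variance(SS1, SS2, n1, n2):
    """sp^2 = (SS1 + SS2)/(df1 + df2), with df = n - 1 for each sample."""
    return (SS1 + SS2) / ((n1 - 1) + (n2 - 1))

sp2 = pooled_variance(SS1=90, SS2=110, n1=6, n2=6)
# Estimated standard error of the mean difference: sqrt(sp2/n1 + sp2/n2)
se = math.sqrt(sp2 / 6 + sp2 / 6)
```

Here sp² = (90 + 110)/(5 + 5) = 20, and the estimated standard error is √(20/6 + 20/6).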

Which Chi-Square tests hypotheses about the relationship between two variables in a population?

independence

single factor, independent-measures design (ANOVA)

limited to one independent variable (IV) or one quasi-independent variable -the study uses a separate sample for each of the different treatment conditions being compared

regression equation for Y

linear equation

main effect

mean difference among the levels of one factor

descriptive statistic

mean, standard deviation, and number of scores for each treatment

Cohen's w

measure of effect size that can be used for both chi-square tests

Stages of repeated-measures analysis (stage 2)

measure of individual differences and removal of individual differences from the denominator of the F-ratio

Can chi square be a negative number?

no

dichotomous definition

nominal variables which have only two categories or levels -ex. if we were looking at gender, we would most probably categorize somebody as either "male" or "female"

Spearman Correlation

relationship between two variables when both are measured in ordinal scales

phi-coefficient

relationship between two variables when both measured for each individual are dichotomous

point-biserial correlation

relationship between two variables, one consisting of regular scores and the second having *two* values -"bi-" = 2

positive correlation

relationship in which two variables tend to change in the same direction

perfect correlation

relationship of 1.00, indicating an exactly consistent relationship

factorial design

research study involving more than one factor

testwise alpha level

risk of a Type I error for an individual hypothesis test

single sample t-test variance equation

s^2=SS/df

Pearson chi-square

set of data including calculated chi-square value, degrees of freedom, and level of significance

Alternatives to pearson correlation

spearman correlation, point-biserial correlation, and phi-coefficient

3 Stages of repeated-measures analysis

stage 1: analysis of the SS and df into within-treatments and between-treatments components stage 2: measure of individual differences and removal of individual differences from the denominator of the F-ratio stage 3: computation of variances and the F-ratio

Variance

standard deviation squared

T-statistic and equations

t = (M − μ)/sM, where the estimated standard error is sM = √(s²/n) and the sample variance is s² = SS/df

matrix

table showing different combinations of the variables, producing different conditions

parametric test

test that concerns parameters and requires assumptions about parameters

as the number of categories (C) increases,

the Critical value of the chi-square statistic for a one-way test increases

The critical value of the chi-square statistic for a one-way test increases as:

the number of categories (C) increases

Within degrees of freedom

∑(n − 1) = ∑df in each treatment = N − k. The number of scores in each treatment minus one, summed across treatments, equals the within-treatments degrees of freedom.

interaction

"extra" mean difference

A x B interaction

"extra" mean difference not accounted for by the main effects of the two factors

n

# of scores in each treatment

k

# of treatment conditions

Analysis of Variance

(ANOVA) statistical hypothesis-testing procedure used to evaluate mean differences between two or more treatments (or populations) - often used to determine whether three or more means are statistically different from one another

Cohen's d (equation)

(effect size) d = mean difference/standard deviation, e.g., d = (M − μ)/s

F ratio

F = MSbetween/MSwithin (variance between treatments divided by variance within treatments)

how is f-ratio related to the t-statistic?

For a study comparing exactly two treatments, F = t²; both ratios compare the obtained mean difference to the difference expected by chance.

inferential statistic

a statistic that uses sample data to draw general conclusions (make inferences) about populations

typical research situation in which ANOVA would be used

a researcher wants to compare the means of three or more treatment conditions or populations in a single test, for example, a study with three separate samples, one for each of three treatment conditions

t-test uses ___ standard error

*estimated* standard error

Overview of Two-Factor, Independent-measures ANOVA

- We can examine three types of mean differences within one analysis - Complex analysis of variance - Evaluates differences among two or more sample means - Two independent variables are manipulated (factorial ANOVA; only two-factor in textbook) - Both independent variables and quasi-independent variables may be employed as factors in a two-factor ANOVA - An independent variable (factor) is manipulated in an experiment - A quasi-independent variable (factor) is not manipulated but defines the groups of scores in a nonexperimental study - *factorial designs* - *three hypotheses tested by three F-ratios*

how to identify hypotheses for ANOVAs (instead of t-tests), and what does this do?

H0: μ1 = μ2 = μ3 (all population means are equal); H1: at least one population mean is different. Instead of testing pairs of means as t-tests do, ANOVA evaluates all of the mean differences in a single test.

What is the implication when an ANOVA produces a very large value for the F-ratio?

A large F-ratio indicates there is a treatment effect because the mean differences in the numerator are much bigger than the differences expected if there were no treatment effect (the denominator); the between-groups differences are significant.

Q3. Explain why t distributions tend to be flatter and more spread out than the normal distribution (z-score distribution).

A z-score is used when the population standard deviation (or variance) is known. The t statistic is used when the population variance or standard deviation is unknown, substituting the sample variance or standard deviation for the unknown population values. Because the sample variance changes from sample to sample, t statistics are more variable than z-scores, which makes the t distribution flatter and more spread out than the normal distribution.

To evaluate the effect of a treatment, a sample (n = 16) is obtained from a population with a mean of µ = 30 and a treatment is administered to the individuals in the sample. After treatment, the sample mean is found to be M = 31.3 with a standard deviation of s = 3.Are the data sufficient to conclude that the treatment has a significant effect using a two-tailed test with α = .05?

CR = ±2.131; sM = 0.75; t = 1.733. Fail to reject the null; t(15) = 1.733, p > .05. The data are not sufficient; there is no significant treatment effect.
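The arithmetic for this problem can be reproduced in a few lines of Python (the critical value 2.131 comes from the t table for df = 15, two-tailed, α = .05):

```python
import math

# Given: n = 16, population mean mu = 30, sample mean M = 31.3, s = 3
n, mu, M, s = 16, 30, 31.3, 3

sM = math.sqrt(s ** 2 / n)       # estimated standard error = 0.75
t = (M - mu) / sM                # observed t = 1.733
critical = 2.131                 # from the t-distribution table (df = 15)
decision = "reject H0" if abs(t) > critical else "fail to reject H0"
```

Since 1.733 < 2.131, the decision is to fail to reject the null hypothesis.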

Factorial designs

Consider more than one factor - We will study two-factor designs only - Also limited to situations with equal n's in each group - The combined impact of the factors is considered

T/F A positive value for the chi-square statistic indicates a positive relationship between the two variables, and a negative value for the chi-square statistic indicates a negative relationship between the two variables.

False, Chi-square cannot be a negative number, so it cannot accurately show the direction of the relationship between the two variables

T/F If the null hypothesis is true, the F-ratio for ANOVA is expected (on average) to have a value of 0.

False, If the null hypothesis is true, the F-ratio will have a value near 1.00

T/F A large value for chi-square will tend to the statistical decision to retain (i.e., fail to reject) the null hypothesis.

False, Large values of chi-square indicate that observed frequencies differ a lot from null hypothesis predictions

T/F Posttests are needed if the decision from an analysis of variance is "fail to reject the null hypothesis."

False, Post hoc tests are needed only if you reject H0 (indicating that at least one mean difference is significant)

True or False: If the null hypothesis is true, the F-ratio for ANOVA is expected (on average) to have a value of 0.

False: If the null hypothesis is true, the F-ratio will have a value near 1.00.

True or False: Sample size has a great influence on measures of effect size.

False: Measures of effect size are not influenced to any great extent by sample size.

True of False: Post tests are needed if the decision from an analysis of variance is to fail to reject the null hypothesis.

False: Post hoc tests are only needed when at least one mean difference is significant

True or False: If the Y variable decreases when the X variable decreases, their correlation is negative

False: The variables change in the same direction, a positive correlation

True or False: When the value of the t statistic is near 0, the null hypothesis should be rejected.

False: When the value of t is near 0, the difference between M and μ is also near 0.

When n is small (less than 30), the t distribution ____.

Is flatter and more spread out than the normal z distribution

Three hypotheses tested by three F-ratios

Large F-ratio → greater treatment differences than would be expected with no treatment effects

Which combination of factors is most likely to produce a large value for the F-ratio?

Large mean differences and small sample variances

Let's say we're interested in whether police-involved shootings are disproportionately distributed compared to national data on race/ethnicity. We know that the 2019 population distribution is as follows: Number of Individuals in Various Racial/Ethnic Groups of the US Population: White 198 million Black 44 million Hispanic/Latinx 60 million Other 26 million Shooting Deaths by Police in 2019, based on Victim's Race/Ethnicity: White 404 Black 250 Hispanic/Latinx 163 Other/Unknown 184 Total 1001 Data come from the US Census Bureau and The Washington Post's Fatal Force project. Population groups are rounded to the nearest 1 million. Using a standard alpha of 0.05, select the attributes that best/most appropriately describe this study's: -null hypothesis -alternative hypothesis -degrees of freedom -critical chi-squared value

a. H0: The racial/ethnic distribution of police-involved shooting deaths will match the racial/ethnic distribution of the US population. b. H1: The racial/ethnic distribution of police-involved shooting deaths will not match the racial/ethnic distribution of the US population. c. df = 3 d. Critical chi-squared value = 7.81
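The expected frequencies for this goodness-of-fit problem come from the population proportions (ƒe = pn), and the chi-square statistic can be computed directly from the card's numbers:

```python
# Population sizes (millions) and observed shooting deaths, from the card
pop = {"White": 198, "Black": 44, "Hispanic/Latinx": 60, "Other": 26}
deaths = {"White": 404, "Black": 250, "Hispanic/Latinx": 163, "Other": 184}

n = sum(deaths.values())         # total deaths = 1001
total_pop = sum(pop.values())    # 328 million

fe = {g: pop[g] / total_pop * n for g in pop}   # fe = p * n for each group
chi_square = sum((deaths[g] - fe[g]) ** 2 / fe[g] for g in pop)
df = len(pop) - 1                # C - 1 = 3
# Compare chi_square to the critical value 7.81 (alpha = .05, df = 3, from the table)
```

The obtained chi-square far exceeds 7.81, so the observed distribution does not match the population distribution.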

A sample of n = 25 scores has a mean of M = 83 and a standard deviation of s = 15. Compute the estimated standard error for the sample mean and explain what is measured by the estimated standard error.

The estimated standard error is 3 points [sM = √(s²/n) or s/√n]. The estimated standard error provides an estimate of the average distance between a sample mean and the population mean, that is, how well our sample mean represents the population mean. (It serves the same purpose standard error serves in a z-score test, but here the population SD needed to compute the true standard error is unknown.)

Goodness-of-fit

The extent to which observed frequencies in a chi-square test match the expected frequencies

Which of the following is not an assumption of the chi-square test?

The observations are measured on a continuous measurement scale

*Alternative* hypothesis for non-directional; chi-square tests

The observed distribution of frequencies does not equal the expected distribution of frequencies for each category

3 hypotheses of two-factor anova

The two-factor ANOVA is composed of three distinct hypothesis tests: 1. The main effect of factor A (often called the A-effect). Assuming that factor A is used to define the rows of the matrix, the main effect of factor A evaluates the mean differences between rows. 2. The main effect of factor B (called the B-effect). Assuming that factor B is used to define the columns of the matrix, the main effect of factor B evaluates the mean differences between columns. 3. The interaction (called the AxB interaction). The interaction evaluates mean differences between treatment conditions that are not predicted from the overall main effects from factor A or factor B.
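For the simplest (2×2) case, the two main effects and the interaction can be illustrated with a small Python sketch; the cell means below are hypothetical, and for a 2×2 table the interaction reduces to a "difference of differences":

```python
# Hypothetical 2x2 table of cell means (rows = factor A, columns = factor B)
means = [[10, 20],
         [30, 20]]

row_means = [sum(r) / 2 for r in means]                          # main effect of A
col_means = [(means[0][j] + means[1][j]) / 2 for j in range(2)]  # main effect of B

# Interaction: the effect of B at A1 minus the effect of B at A2.
# A nonzero value means B's effect depends on the level of A.
interaction = (means[0][0] - means[0][1]) - (means[1][0] - means[1][1])
```

Here the row means differ (an A-effect), the column means are equal (no B-effect), yet the interaction is nonzero, showing an "extra" mean difference not predicted by the main effects.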

T/F For a fixed alpha level of significance used for a hypothesis test, the critical value for a chi-square statistic increases as the degrees of freedom increase.

True

T/F In a chi-square test, the observed frequencies are always whole numbers.

True, Observed frequencies are just frequency counts, so there can be no fractional values

True or False: By chance, two samples selected from the same population have the same size (n = 36) and the same mean (M = 83). They will also have the same t statistic.

False: Although μ, n, and M are the same for both samples, the two samples can have different variances (SS), so the estimated standard errors, and therefore the t statistics, can differ.

True or False: A report shows ANOVA results: F(2, 27) = 5.36, p < .05. You can conclude that the study used a total of 30 participants.

True: Because dfwithin = N - k

True or False: ANOVA allows researchers to compare several treatment conditions without conducting several hypothesis tests.

True: Several conditions can be compared in one test.

True or False: It is possible for the regression equation to have none of the actual data points on the regression line.

True: The line is an estimator.

True or False: Compared to a z-score, a hypothesis test with a t statistic requires less information about the population

True: The t statistic does not require the population standard deviation; the z-test does.

True or False: If r = 0.58, the linear regression equation predicts about one third of the variance in the Y scores.

True: When r = .58, r^2 = .336

T/F A report shows ANOVA results: F(2, 27) = 5.36, p < .05. You can conclude that the study used a total of 30 participants.

True; Remember that dftotal = N - 1 and dftotal = dfbetween + dfwithin So: dftotal = dfbetween + dfwithin = 2 + 27 = 29 And since dftotal = N - 1, then 29 = N - 1 So N = 30

Let's say that we are interested in the most effective way of soothing anxious dogs (like Rocky, our course TA). We design a study where 18 dogs have their blood pressure tested 4 times, under 4 different conditions. (Blood pressure increases under stressful conditions and decreases during relaxation). Condition 1 = Control condition (experimenters sit quietly, not interacting with the dog as he rests for 5 minutes before measurement). Condition 2 = Head pats (the experimenter pats the dog's head for 5 minutes). Condition 3 = Belly rubs (experimenter rubs the dog's belly for 5 minutes). Condition 4 = Verbal positive reinforcement (experimenter spends 5 minutes telling the dog nice things, such as "Who's a good boy? You're a good boy!"). •Does this experiment call for a one-factor ANOVA or a two-factor ANOVA? •Is this a repeated-measures ANOVA, or independent-samples? •How many levels of the factor are there? One-factor ("one way") ANOVA Two-factor ("two-way") ANOVA repeated measures ANOVA independent samples ANOVA k = 2 k = 3 k = 4

One-factor ("one way") ANOVA ........... repeated measures ANOVA ........... k = 4

How to check calculations for the Mann-Whitney U

UA + UB = (nA)(nB)
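That identity can be demonstrated by computing both U values for two small, tie-free samples (the scores below are hypothetical), using the rank-sum formula U = nAnB + n(n + 1)/2 − R:

```python
# Two hypothetical, tie-free samples
A = [3, 5, 8]
B = [10, 12, 14, 20]

combined = sorted(A + B)
rank = {v: i for i, v in enumerate(combined, start=1)}  # 1-based ranks, no ties

RA = sum(rank[v] for v in A)   # sum of ranks for group A
RB = sum(rank[v] for v in B)   # sum of ranks for group B
nA, nB = len(A), len(B)

UA = nA * nB + nA * (nA + 1) // 2 - RA
UB = nA * nB + nB * (nB + 1) // 2 - RB
# Check: UA + UB must equal nA * nB
```

For these samples UA = 12 and UB = 0, and UA + UB = 12 = nA × nB, confirming the check.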

If there is no systematic treatment effect, then what value is expected, on average, for the F-ratio in an ANOVA?

When H0 is true, the expected value for F = 1.00 because the numerator and denominator both measure only nonsystematic differences (expected error); thus no treatment or conditional effects are present

Under what conditions can the phi-coefficient be used to measure effect size for a chi-square test for independence?

When both variables consist of exactly two categories

regression equations

Y=bX+a - this regression line equation results in the least squared error between the data points and the line

ordinal

[think "ordered"] happens in a specific, sometimes repeated way (can be qualitative or quantitative)

Statistical procedures that deal with ordinal statistics are generally not valid when:

a large proportion of the cases are tied

independent-measures design

a separate group of participants (sample) for each of the different treatment conditions being compared

The Pearson correlation coefficient is not applicable in all situations(i.e. when we have outliers or a restriction of range), so statisticians have created alternatives to Pearson. Match the Pearson alternative to the situation in which it would be used. Write the letter a, b, or c next to the corresponding situation a. Spearman Correlation __ 2 dichotomous variables b. Point-biserial Correlation __ 1 dichotomous variable & 1 variable is interval or ratio c. Phi-Coefficient __ ordinal variable(s) and curvilinear relationship

2 dichotomous variables → c. Phi-Coefficient; 1 dichotomous variable & 1 interval/ratio variable → b. Point-biserial Correlation; ordinal variable(s) and curvilinear relationship → a. Spearman Correlation

Which of the following are true of post-hoc tests? (Select all that apply) a. They are done after a significant difference is found in an ANOVA b. They only apply to one-way ANOVAs c. Tukey's and Scheffe are the most common examples d. They help you locate where the significant difference is e. They measure effect size

a. They are done after a significant difference is found in an ANOVA c. Tukey's and Scheffe are the most common examples d. They help you locate where the significant difference is (Post hoc tests do not measure effect size, and they are not limited to one-way ANOVAs.)

negative correlation

correlation in which two variables tend to go in opposite directions

linear equations/regression

correlations give rise to a line of best fit -regressions analyze the relationship this line reflects (IDs the equation of that line)

Partial correlation

evaluating relationship b/w X and Y while controlling variance caused by Z

pn

expected frequencies for the goodness-of-fit test

outlier

extreme datum point

In analysis of variance, the variable (independent or quasi-independent) that designates the groups being compared is called a __________.

factor

two-factor design is also known as....

factorial design

how to compute SP (sum of products) and formulas

SP = ∑XY − (∑X)(∑Y)/n, or with deviation scores SP = ∑(X − MX)(Y − MY): find the products of X times Y and sum them

SStotal-SSwithin

formula for between treatments sum of squares

k-1

formula for between-treatments degrees of freedom

N-1

formula for total degrees of freedom

∑X² − G²/N

formula for total sum of squares

∑(n-1) = ∑df in each treatment

formula for within-treatments degrees of freedom

∑SS inside each treatment

formula for within-treatments sum of squares

Which Chi-Square test uses frequency data from a sample to test hypotheses about a population?

goodness of fit

linear relationship

indicator of how well the data points fit a straight line

single-factor study

indicator that the research study involves only one independent variable

level(s) of the factor (in ANOVAs)

individual condition or value that makes up a variable/factor -ie. if age group is the factor, 16- to 20-year-olds, 21- to 25-year-olds, and ≥ 26-year-olds are the levels of the factor.

The value of chi square can never be...

less than zero

coefficient of determination

measure of proportion of variability in one variable determined from the relationship with another variable

standard error of estimate

measure of standard distance between predicted Y values on regression line and actual Y values

sum of products of deviations

measure of the amount of covariability between two variables

Pearson correlation

measure of the degree and the direction of the linear relationship between two variables - has a value between -1 and 1 where: • -1 indicates perfectly negative linear correlation b/w two variables • 0 indicates no linear correlation between two variables • 1 indicates perfectly positive linear correlation between two variables

between-subjects variance

measure of the size of the individual differences

Spearman correlation

measures relationship b/w two variables that are both measured on an ordinal scale (Both X and Y values are ranks) -it measures the degree of consistency of direction for the relationship but does *not* require that the points cluster around a straight line -to compute: Pearson formula is applied to ordinal data

Cramér's V

modification of the phi-coefficient that can be used to measure effect size

restricted range

set of scores that do not represent the full range of possible values

variance equals

standard deviation squared

regression

statistical technique for finding the best-fitting straight line for a set of data

correlation

statistical technique used to measure and describe the relationship between two variables

repeated-measures ANOVA

strategy in which the same group of individuals participates in every treatment

two-factor ANOVA

strategy in which two independent variables (factors) are manipulated, or used to define groups, while a dependent variable is observed -tests for mean differences in research studies -the two-factor ANOVA allows us to examine three types of mean differences within one analysis. In particular, we conduct three separate hypothesis tests for the same data, with a separate F-ratio for each test

two-factor design

study that combines two variables -the sample size is the same for all treatment conditions

single-factor design

study that has only one independent variable

two-factor, independent-measures, equal n design

study with exactly two factors that uses a separate sample for each treatment condition

G

sum of all of the scores in the research study

T

sum of the scores for each treatment condition

f-ratio equation and definition

F = MSbetween/MSwithin. The numerator of the F-ratio measures the actual mean differences in the data, and the denominator measures the differences that would be expected if there were no treatment effect. A large value for the F-ratio indicates that the sample mean differences are greater than would be expected by chance alone, and therefore provides evidence of a treatment effect. To determine whether an obtained F-ratio is significant, compare it with the critical value found in the F-distribution table in Appendix B.
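A complete one-way independent-measures F-ratio can be computed by hand in a short Python sketch, using the SS and df formulas from the cards above (the scores are hypothetical):

```python
# Three hypothetical treatment groups of scores
treatments = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

N = sum(len(t) for t in treatments)   # total number of scores
k = len(treatments)                   # number of treatment conditions
G = sum(sum(t) for t in treatments)   # grand total of all scores

SS_total = sum(x ** 2 for t in treatments for x in t) - G ** 2 / N
SS_within = sum(sum((x - sum(t) / len(t)) ** 2 for x in t) for t in treatments)
SS_between = SS_total - SS_within

MS_between = SS_between / (k - 1)     # df_between = k - 1
MS_within = SS_within / (N - k)       # df_within = N - k
F = MS_between / MS_within
```

For these scores SStotal = 60, SSwithin = 6, SSbetween = 54, giving F = 27, which would then be compared to the critical F value for df = (2, 6).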

For the MANN-WHITNEY U test, when n is greater than 20, U is converted to a z score

true

