Final Exam HTH 320

nominal data (categorical)

Data which consists of names, labels, or unranked categories.

formula for test statistic in ANOVA

Fobs = MSBG / MSE. We want MSBG, the numerator of the F ratio, to be as large as possible because it represents the variance attributable to the treatment/intervention.
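
A minimal sketch of how the F ratio is assembled; the sums of squares, k, and N below are made-up numbers for illustration:

```python
# Sketch: assembling the one-way ANOVA F ratio from hypothetical sums of squares.
SS_BG, SS_E = 120.0, 300.0    # between-groups and error sums of squares (made up)
k, N = 3, 33                  # number of groups and total participants (made up)

df_BG = k - 1                 # degrees of freedom between groups
df_E = N - k                  # degrees of freedom error (within groups)

MS_BG = SS_BG / df_BG         # mean square between groups (treatment variance)
MS_E = SS_E / df_E            # mean square error
F_obs = MS_BG / MS_E          # the larger MS_BG is relative to MS_E, the larger F

print(df_BG, df_E, round(F_obs, 2))   # 2 30 6.0
```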

how to find the total # of participants in a study

N = n × p × q: the number of participants per cell (n) multiplied by the number of levels of each factor (p and q) in a factorial design; e.g., a 2 × 3 design with 10 participants per cell has N = 10 × 2 × 3 = 60.

ordinal data

Ranked categories; the order means something, but the difference between the values does not

hypothesis testing for correlation and regression

Step 1: H0: ρ = 0; H1: ρ ≠ 0. Step 2: two-tailed test at a .05 level of significance; df for a correlation are n - 2. Step 3: the correlation coefficient r is the test statistic for the hypothesis test. Step 4: reject or retain the null by comparing the r value to r critical.
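
A minimal sketch of these steps using scipy.stats.pearsonr; the x and y scores are made up for illustration:

```python
# Sketch: two-tailed test of H0: rho = 0 using Pearson's r (illustrative data).
from scipy import stats

x = [2, 4, 5, 7, 9, 11, 13, 15]    # made-up scores on factor X
y = [1, 3, 6, 6, 10, 9, 14, 13]    # made-up scores on factor Y

r, p_value = stats.pearsonr(x, y)  # r is the test statistic
df = len(x) - 2                    # df for a correlation are n - 2

# Reject H0 at the .05 level if p < .05 (equivalently, if |r| exceeds r critical for df).
print(f"r({df}) = {r:.2f}, p = {p_value:.3f}")
```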

Calculating a One-Way ANOVA

Step 1: H0: σ² = 0 (population means do not vary); H1: σ² > 0 (population means do vary). Step 2: determine dfBG and dfE to find the F critical value. Step 3: if F observed is greater than F critical, reject the null hypothesis. Step 4: APA summary, e.g., F(dfBG, dfE) = ___, p < .05 (or p > .05 if the null is retained).
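
A minimal sketch of these steps using scipy.stats.f_oneway; the three groups of scores are made up for illustration:

```python
# Sketch: one-way between-subjects ANOVA on made-up data for k = 3 groups.
from scipy import stats

group1 = [4, 5, 6, 5, 7]
group2 = [7, 8, 6, 9, 8]
group3 = [10, 9, 11, 8, 12]

F_obs, p_value = stats.f_oneway(group1, group2, group3)

k, N = 3, 15
df_BG, df_E = k - 1, N - k               # 2 and 12
F_crit = stats.f.ppf(0.95, df_BG, df_E)  # F critical at the .05 level

# Reject H0 if F_obs > F_crit; APA style: F(2, 12) = ..., p < .05
print(f"F({df_BG}, {df_E}) = {F_obs:.2f}, F critical = {F_crit:.2f}, p = {p_value:.3f}")
```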

T-tests vs ANOVA

T-tests are used to compare one or two group means, while ANOVA is used to compare two or more group means.

degrees of freedom for goodness of fit test

The df for each chi-square distribution are equal to the number of levels of the categorical variable (k) minus 1: df = k - 1 (always use the first column).

comparing between-subjects and within-subjects design

The within-subjects design is associated with more power to detect an effect than the between-subjects design because some of the error in the denominator is removed. The power of the one-way within-subjects ANOVA is largely based on the assumption that observing the same participants across groups will result in more consistent responding.

chi-square test for independence

a statistical procedure used to determine whether frequencies observed at the combination of levels of two categorical variables are similar to frequencies expected • Determines whether two nominal variables are related • Attempts to determine the extent to which two variables are related • Applies to two categorical variables with any number of levels
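
A minimal sketch using scipy.stats.chi2_contingency; the observed 2 × 3 frequency table is made up for illustration:

```python
# Sketch: chi-square test for independence on a made-up 2 x 3 frequency table.
from scipy.stats import chi2_contingency

# Rows = levels of one categorical variable, columns = levels of the other.
observed = [[20, 30, 25],
            [30, 20, 25]]

chi2, p_value, dof, expected = chi2_contingency(observed)

# dof = (rows - 1)(columns - 1); reject the null (independence) if p < .05.
print(f"chi-square({dof}) = {chi2:.2f}, p = {p_value:.3f}")
print(expected)   # frequencies expected if the two variables are unrelated
```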

source of variation

any variation that can be measured in a study; the one-way ANOVA has two sources of variation: between-groups and within-groups (error)

linearity

assumption that the best way to describe a pattern of data is using a straight line

Homoscedasticity

assumption that there is an equal ("homo") variance or scatter ("scedasticity") of data points dispersed along the regression line

types of variation in a within-subjects ANOVA

between-groups, within-groups (error), and between-persons

between-persons variation

calculated and then subtracted from the error term in the denominator of the test statistic, this reduces the error term in the denominator, thereby increasing the power of the test

how to compute a post-hoc test

conduct a test statistic for each pair of means. Step 1: identify the pairwise comparisons (Mean 1 vs. Mean 2, Mean 1 vs. Mean 3, Mean 2 vs. Mean 3). Step 2: compute Tukey's HSD critical value and compare each pairwise mean difference to it. Step 3: APA summary, stating whether any of the pairs differ significantly.
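
A minimal sketch using scipy.stats.tukey_hsd (available in recent SciPy releases); the group data are made up for illustration:

```python
# Sketch: Tukey HSD comparisons for each pair of group means (made-up data).
from scipy.stats import tukey_hsd

group1 = [4, 5, 6, 5, 7]
group2 = [7, 8, 6, 9, 8]
group3 = [10, 9, 11, 8, 12]

result = tukey_hsd(group1, group2, group3)

# One row per pairwise comparison (1 vs 2, 1 vs 3, 2 vs 3); a pair differs
# significantly at the .05 level when its p-value falls below .05.
print(result)
```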

post hoc tests

conducted when an ANOVA is significant, only necessary when k > 2 because multiple comparisons are needed

between-subjects ANOVA post-hoc tests

conservative tests: the Scheffé and Bonferroni procedures. A conservative test has less power, which means you are less likely to reject the null hypothesis (alternatively: more likely to retain the null hypothesis). Fisher's LSD is the most liberal test, meaning you are more likely to reject the null hypothesis (alternatively: less likely to retain the null hypothesis). The Tukey HSD post-hoc test is the most common.

frequency expected

count or frequency of participants expected in each category or at each level of the categorical variable, as determined by the proportion expected in each category. Multiply the total sample size (N) by the proportion expected in each category (p) to find the frequency expected in each category: fe = N × p
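
A minimal sketch of fe = N × p followed by the goodness-of-fit test with scipy.stats.chisquare; the sample size, proportions, and observed counts are made up:

```python
# Sketch: frequencies expected for a goodness-of-fit test (made-up proportions).
from scipy.stats import chisquare

N = 200                                    # total sample size (made up)
p_expected = [0.25, 0.50, 0.25]            # proportion expected in each category
f_expected = [N * p for p in p_expected]   # fe = N x p -> [50.0, 100.0, 50.0]

f_observed = [60, 95, 45]                  # made-up observed counts (sum to N)

chi2, p_value = chisquare(f_obs=f_observed, f_exp=f_expected)
print(f_expected, round(chi2, 2), round(p_value, 3))
```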

a correlation is used to...

describe the pattern of data points for the values of two factors and determine whether the pattern observed in a sample is also present in the population from which the sample was selected

effect size for ANOVAs

eta-squared and omega-squared. For the between-subjects ANOVA we can report eta-squared or omega-squared; omega-squared is the more conservative estimate, and eta-squared will always be the larger number.
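
A minimal sketch of both effect sizes computed from hypothetical ANOVA table values, using the standard one-way between-subjects formulas:

```python
# Sketch: eta-squared and omega-squared from hypothetical ANOVA table values.
SS_BG, SS_E = 120.0, 300.0   # between-groups and error sums of squares (made up)
df_BG, df_E = 2, 30          # degrees of freedom (made up)

SS_T = SS_BG + SS_E          # total sum of squares
MS_E = SS_E / df_E           # mean square error

eta_sq = SS_BG / SS_T                              # ~0.29
omega_sq = (SS_BG - df_BG * MS_E) / (SS_T + MS_E)  # ~0.23, the more conservative estimate

print(round(eta_sq, 2), round(omega_sq, 2))
```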

interpretation of lines in main effects and interactions

flat parallel lines = no main effect; parallel lines that are not flat = main effect; nonparallel (converging or crossing) lines = possible interaction

what are the three sources of between-groups variation for a Two-Way ANOVA

main effect for factor A, main effect for factor B, and the interaction of factors A and B

coefficient of determination (r^2)

mathematically equivalent to eta-squared and is used to measure the proportion of variance of one factor (Y) that can be explained by known values of a second factor (X)

how to find total # of cells in a factorial

p × q: the number of levels of the first factor multiplied by the number of levels of the second factor; e.g., a 2 × 3 factorial has 6 cells.

what does k stand for?

number of groups

what does n stand for?

number of participants per group

what does N stand for?

number of total participants in a study

how many factors does a one-way ANOVA test?

one

alternative correlation coefficients

Pearson: both factors are interval or ratio data. Spearman: both factors are ranked or ordinal data. Point-biserial: one factor is dichotomous (nominal data/unranked) and the other factor is continuous (interval or ratio data). Phi: both factors are dichotomous (nominal).
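
A minimal sketch of how each coefficient maps onto a SciPy function; the data are made up, and phi is noted only in a comment:

```python
# Sketch: choosing a correlation coefficient by scale of measurement (made-up data).
from scipy import stats

interval_x = [2, 4, 5, 7, 9, 11]
interval_y = [1, 3, 6, 6, 10, 9]
dichotomous = [0, 0, 0, 1, 1, 1]   # a two-level (dichotomous) nominal variable

r_pearson, _ = stats.pearsonr(interval_x, interval_y)           # both interval/ratio
r_spearman, _ = stats.spearmanr(interval_x, interval_y)         # both ranked/ordinal
r_pointbis, _ = stats.pointbiserialr(dichotomous, interval_y)   # one dichotomous, one continuous

# phi (both dichotomous) can be taken from a 2 x 2 chi-square: phi = sqrt(chi2 / N).
print(round(r_pearson, 2), round(r_spearman, 2), round(r_pointbis, 2))
```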

two types of variables in linear regression

predictor variable: the known variable (X), the variable with values that are known and can be used to predict values of another variable. Criterion variable: the to-be-predicted variable (Y), the variable with unknown values that can be predicted or estimated given known values of the predictor variable.

pearson correlation coefficient formula

r = SS_XY / √(SS_X × SS_Y), i.e., the covariance of X and Y divided by the variance of X and Y measured separately. The value in the numerator reflects the extent to which values on the x-axis and y-axis vary together; the extent to which values of X and Y vary independently, or separately, is placed in the denominator. The larger the covariance between X and Y, the stronger the Pearson correlation: we want the numerator to be large and the denominator to be small.
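
A minimal sketch of the formula computed by hand with NumPy on made-up data:

```python
# Sketch: Pearson's r from sums of squares, r = SS_XY / sqrt(SS_X * SS_Y).
import numpy as np

x = np.array([2, 4, 5, 7, 9, 11], dtype=float)   # made-up data
y = np.array([1, 3, 6, 6, 10, 9], dtype=float)

SS_XY = np.sum((x - x.mean()) * (y - y.mean()))  # covariation of X and Y (numerator)
SS_X = np.sum((x - x.mean()) ** 2)               # variation of X alone
SS_Y = np.sum((y - y.mean()) ** 2)               # variation of Y alone

r = SS_XY / np.sqrt(SS_X * SS_Y)   # large numerator, small denominator -> strong r
print(round(r, 3))                 # matches scipy.stats.pearsonr(x, y)[0]
```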

between-subjects design

refers to observing different participants one time in each group

within-subjects design

refers to observing the same participants in each group, also known as repeated measures

what type of graph represents a correlation?

scatter plot

degrees of freedom for ANOVA

split degrees of freedom into two parts: degrees of freedom between groups (dfBG = k - 1) and degrees of freedom error, or within groups (dfE = N - k)

multiple regression

statistical method that includes two or more predictor variables in the equation of a regression line to predict changes in a criterion variable

method of least squares

statistical procedure used to compute the slope (b) and y-intercept (a) of the best fitting straight line to a set of data points
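
A minimal sketch of the least-squares computations, b = SS_XY / SS_X and a = M_Y − b·M_X, on made-up data:

```python
# Sketch: slope and y-intercept of the best-fitting line by least squares.
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)   # predictor (known) variable, made up
y = np.array([2, 4, 5, 4, 6], dtype=float)   # criterion (to-be-predicted) variable

SS_XY = np.sum((x - x.mean()) * (y - y.mean()))
SS_X = np.sum((x - x.mean()) ** 2)

b = SS_XY / SS_X              # slope of the best-fitting straight line
a = y.mean() - b * x.mean()   # y-intercept

print(f"Y-hat = {b:.2f}X + {a:.2f}")   # equation for predicting Y from known X
```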

goodness of fit test

statistical procedure used to test hypotheses about the discrepancy between the observed and expected frequencies for the levels of a single categorical variable or two categorical variables observed together. Indicates how well a set of observed frequencies fits with what was expected

chi-square test

statistical procedure used to test hypotheses about the discrepancy between the observed and expected frequencies in two or more nominal categories

Two-Way ANOVA

statistical procedure used to test hypotheses concerning the variance of groups created by combining the levels of two factors. This test is used when the variance in any one population is unknown

regression line

the best-fitting straight line to a set of data points. A best-fitting line is a line that minimizes the distance of all data points that fall from it

pearson correlation coefficient

used to measure the direction and strength of the linear relationship of two factors in which the data for both factors are measured on an interval or ratio scale of measurement

correlation coefficient (r)

used to measure the strength and direction of the linear relationship, or correlation, between two factors, r values range from -1.0 to 1.0

strength of correlation

values closer to ±1.0 indicate stronger correlations; scores are more consistent the closer they fall to the regression line. A zero correlation means there is no linear pattern between two factors; a perfect correlation occurs when each data point falls exactly on a straight line.

between groups variation

variance of group means

within-groups (error) variation

variation attributed to error

degrees of freedom for independence test

• The test is associated with df = (k1 - 1)(k2 - 1), where k1 and k2 are the number of levels of each categorical variable • As with the chi-square goodness-of-fit test, the df reflect the number of cells or categories that are free to vary in a frequency table

interpretation for an independence test

• If two categorical variables are independent, they are not related (null) • If two categorical variables are dependent, they are related or correlated (alternative)

Non-parametric tests

Used • to test hypotheses that do not make inferences about parameters in a population (no assumptions), • to test hypotheses about data that can have any type of distribution, and • to analyze data on a nominal or ordinal scale of measurement (DV is nominal or ordinal, aka categorical). Parametric tests have more power, which is why they are used more often.

measures of effect size for chi-square

• Phi coefficient (for a 2X2 only) • Cramer's V (for anything other than a 2X2)

test statistics for chi-square tests

• The null hypothesis for the chi-square goodness-of-fit test is that the expected frequencies are correct • The alternative hypothesis is that the expected frequencies are not correct. • The larger the discrepancy between the observed and expected frequencies, the more likely we are to reject the null hypothesis.

Cramer's V

• When the levels of one or more categorical variables are greater than 2, we use Cramer's V or Cramer's phi to estimate effect size • df-smaller is the smaller of the two df
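
A minimal sketch of computing Cramer's V (and, for a 2 × 2 table, phi) from a chi-square on a made-up frequency table:

```python
# Sketch: phi and Cramer's V as effect sizes for a chi-square test (made-up table).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[20, 30, 25],
                     [30, 20, 25]])   # a 2 x 3 frequency table (made up)

chi2, p_value, dof, expected = chi2_contingency(observed)
N = observed.sum()
df_smaller = min(observed.shape[0] - 1, observed.shape[1] - 1)   # smaller of the two df

cramers_v = np.sqrt(chi2 / (N * df_smaller))   # for a 2 x 2 this reduces to phi = sqrt(chi2 / N)
print(round(cramers_v, 2))
```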

chi-square distribution

• positively skewed distribution of chi-square test statistic values for all possible samples when the null hypothesis is true

limits in interpretation (problems with correlation)

- Reverse causality is a problem that arises when the direction of causality between two factors can be in either direction. - A confound variable, or third variable, is an unanticipated variable that could be causing changes in one or more measured variables. Outliers: - An outlier is a score that falls substantially above or below most other scores in a data set. - Outliers can obscure the relationship between two factors by altering the direction and strength of an observed correlation.

Rules for power and within-subjects design

1. As SSBG increases, power increases 2. As SSE decreases, power increases 3. As MSE decreases, power increases

